Feb 18 13:59:26 crc systemd[1]: Starting Kubernetes Kubelet... Feb 18 13:59:26 crc restorecon[4693]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Feb 18 13:59:26 
crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 
13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:26 crc 
restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 
crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 
crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 13:59:26 crc restorecon[4693]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 13:59:26 crc restorecon[4693]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:26 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 
13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 13:59:27 crc 
restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 
13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 
13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc 
restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 13:59:27 crc restorecon[4693]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 13:59:27 crc restorecon[4693]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 18 13:59:27 crc kubenswrapper[4739]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 13:59:27 crc kubenswrapper[4739]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 18 13:59:27 crc kubenswrapper[4739]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 13:59:27 crc kubenswrapper[4739]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 18 13:59:27 crc kubenswrapper[4739]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 18 13:59:27 crc kubenswrapper[4739]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.001550 4739 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008252 4739 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008286 4739 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008297 4739 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008306 4739 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008315 4739 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008324 4739 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008333 4739 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008341 4739 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008349 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008360 4739 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008371 4739 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
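The flag-deprecation warnings above say that several kubelet flags (--container-runtime-endpoint, --minimum-container-ttl-duration, --volume-plugin-dir, --register-with-taints, --system-reserved) should instead be set through the file passed to --config, while --pod-infra-container-image is superseded by the CRI runtime's own sandbox-image setting. Purely as an illustration of that mapping, and not a reproduction of this node's actual configuration, here is a minimal sketch; all values are placeholders, and the field names follow the kubelet.config.k8s.io/v1beta1 KubeletConfiguration schema as I understand it (the kubelet reads this file as YAML or JSON):

    # Illustrative only: placeholder values, not taken from this node's config.
    # Maps the deprecated flags logged above onto KubeletConfiguration fields.
    import json

    kubelet_config = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        # --container-runtime-endpoint -> containerRuntimeEndpoint
        "containerRuntimeEndpoint": "unix:///var/run/crio/crio.sock",
        # --volume-plugin-dir -> volumePluginDir
        "volumePluginDir": "/etc/kubernetes/kubelet-plugins/volume/exec",
        # --register-with-taints -> registerWithTaints
        "registerWithTaints": [
            {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"},
        ],
        # --system-reserved -> systemReserved
        "systemReserved": {"cpu": "500m", "memory": "1Gi"},
        # --minimum-container-ttl-duration is retired in favour of eviction settings
        "evictionHard": {"memory.available": "100Mi"},
        # --pod-infra-container-image has no field here; per the message above, the
        # sandbox image now comes from the CRI runtime's own configuration.
    }

    print(json.dumps(kubelet_config, indent=2))
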
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008381 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008391 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008399 4739 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008408 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008416 4739 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008424 4739 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008432 4739 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008440 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008473 4739 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008481 4739 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008488 4739 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008496 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008504 4739 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008512 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008519 4739 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008527 4739 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008534 4739 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008542 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008550 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008557 4739 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008565 4739 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008572 4739 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008580 4739 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008587 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008597 4739 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008607 4739 feature_gate.go:330] 
unrecognized feature gate: NetworkSegmentation Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008614 4739 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008622 4739 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008632 4739 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008642 4739 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008651 4739 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008660 4739 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008669 4739 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008678 4739 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008686 4739 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008696 4739 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008704 4739 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008712 4739 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008720 4739 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008727 4739 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008735 4739 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008742 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008749 4739 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008760 4739 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008769 4739 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008777 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008785 4739 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008794 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008803 4739 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008812 4739 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008820 4739 feature_gate.go:330] unrecognized feature gate: Example Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008829 4739 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008837 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008846 4739 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008854 4739 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008861 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008871 4739 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008879 4739 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008886 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.008895 4739 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011234 4739 flags.go:64] FLAG: --address="0.0.0.0" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011262 4739 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011277 4739 flags.go:64] FLAG: --anonymous-auth="true" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011288 4739 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011300 4739 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011311 4739 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011322 4739 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011333 4739 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011342 4739 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011351 4739 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011362 4739 flags.go:64] FLAG: 
--bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011371 4739 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011380 4739 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011389 4739 flags.go:64] FLAG: --cgroup-root="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011397 4739 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011406 4739 flags.go:64] FLAG: --client-ca-file="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011415 4739 flags.go:64] FLAG: --cloud-config="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011423 4739 flags.go:64] FLAG: --cloud-provider="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011432 4739 flags.go:64] FLAG: --cluster-dns="[]" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011478 4739 flags.go:64] FLAG: --cluster-domain="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011500 4739 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011519 4739 flags.go:64] FLAG: --config-dir="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011531 4739 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011541 4739 flags.go:64] FLAG: --container-log-max-files="5" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011553 4739 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011565 4739 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011574 4739 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011583 4739 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011592 4739 flags.go:64] FLAG: --contention-profiling="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011601 4739 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011610 4739 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011619 4739 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011628 4739 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011639 4739 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011648 4739 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011656 4739 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011668 4739 flags.go:64] FLAG: --enable-load-reader="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011677 4739 flags.go:64] FLAG: --enable-server="true" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011686 4739 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011698 4739 flags.go:64] FLAG: --event-burst="100" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011708 4739 flags.go:64] FLAG: --event-qps="50" 
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011747 4739 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011757 4739 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011766 4739 flags.go:64] FLAG: --eviction-hard="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011777 4739 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011786 4739 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011796 4739 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011805 4739 flags.go:64] FLAG: --eviction-soft="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011814 4739 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011823 4739 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011832 4739 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011843 4739 flags.go:64] FLAG: --experimental-mounter-path="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011861 4739 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011883 4739 flags.go:64] FLAG: --fail-swap-on="true" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011895 4739 flags.go:64] FLAG: --feature-gates="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011911 4739 flags.go:64] FLAG: --file-check-frequency="20s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011923 4739 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011936 4739 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011947 4739 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011957 4739 flags.go:64] FLAG: --healthz-port="10248" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011966 4739 flags.go:64] FLAG: --help="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011976 4739 flags.go:64] FLAG: --hostname-override="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011985 4739 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.011994 4739 flags.go:64] FLAG: --http-check-frequency="20s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012003 4739 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012012 4739 flags.go:64] FLAG: --image-credential-provider-config="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012020 4739 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012029 4739 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012038 4739 flags.go:64] FLAG: --image-service-endpoint="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012046 4739 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012055 4739 flags.go:64] FLAG: --kube-api-burst="100" Feb 18 13:59:28 crc 
kubenswrapper[4739]: I0218 13:59:28.012064 4739 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012076 4739 flags.go:64] FLAG: --kube-api-qps="50" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012085 4739 flags.go:64] FLAG: --kube-reserved="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012094 4739 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012102 4739 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012113 4739 flags.go:64] FLAG: --kubelet-cgroups="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012123 4739 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012148 4739 flags.go:64] FLAG: --lock-file="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012162 4739 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012173 4739 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012186 4739 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012204 4739 flags.go:64] FLAG: --log-json-split-stream="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012216 4739 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012228 4739 flags.go:64] FLAG: --log-text-split-stream="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012238 4739 flags.go:64] FLAG: --logging-format="text" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012249 4739 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012261 4739 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012273 4739 flags.go:64] FLAG: --manifest-url="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012284 4739 flags.go:64] FLAG: --manifest-url-header="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012310 4739 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012322 4739 flags.go:64] FLAG: --max-open-files="1000000" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012338 4739 flags.go:64] FLAG: --max-pods="110" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012348 4739 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012357 4739 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012366 4739 flags.go:64] FLAG: --memory-manager-policy="None" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012375 4739 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012385 4739 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012393 4739 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012402 4739 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 
13:59:28.012422 4739 flags.go:64] FLAG: --node-status-max-images="50" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012431 4739 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012440 4739 flags.go:64] FLAG: --oom-score-adj="-999" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012484 4739 flags.go:64] FLAG: --pod-cidr="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012493 4739 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012505 4739 flags.go:64] FLAG: --pod-manifest-path="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012514 4739 flags.go:64] FLAG: --pod-max-pids="-1" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012523 4739 flags.go:64] FLAG: --pods-per-core="0" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012535 4739 flags.go:64] FLAG: --port="10250" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012544 4739 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012553 4739 flags.go:64] FLAG: --provider-id="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012561 4739 flags.go:64] FLAG: --qos-reserved="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012570 4739 flags.go:64] FLAG: --read-only-port="10255" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012581 4739 flags.go:64] FLAG: --register-node="true" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012598 4739 flags.go:64] FLAG: --register-schedulable="true" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012618 4739 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012640 4739 flags.go:64] FLAG: --registry-burst="10" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012651 4739 flags.go:64] FLAG: --registry-qps="5" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012662 4739 flags.go:64] FLAG: --reserved-cpus="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012673 4739 flags.go:64] FLAG: --reserved-memory="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012686 4739 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012698 4739 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012708 4739 flags.go:64] FLAG: --rotate-certificates="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012719 4739 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012729 4739 flags.go:64] FLAG: --runonce="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012740 4739 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012751 4739 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012763 4739 flags.go:64] FLAG: --seccomp-default="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012773 4739 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012784 4739 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 18 13:59:28 crc 
kubenswrapper[4739]: I0218 13:59:28.012795 4739 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012809 4739 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012820 4739 flags.go:64] FLAG: --storage-driver-password="root" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012831 4739 flags.go:64] FLAG: --storage-driver-secure="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012841 4739 flags.go:64] FLAG: --storage-driver-table="stats" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012852 4739 flags.go:64] FLAG: --storage-driver-user="root" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012864 4739 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012875 4739 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012886 4739 flags.go:64] FLAG: --system-cgroups="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012896 4739 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012914 4739 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012925 4739 flags.go:64] FLAG: --tls-cert-file="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012935 4739 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012949 4739 flags.go:64] FLAG: --tls-min-version="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012961 4739 flags.go:64] FLAG: --tls-private-key-file="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012972 4739 flags.go:64] FLAG: --topology-manager-policy="none" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.012991 4739 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.013002 4739 flags.go:64] FLAG: --topology-manager-scope="container" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.013013 4739 flags.go:64] FLAG: --v="2" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.013028 4739 flags.go:64] FLAG: --version="false" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.013042 4739 flags.go:64] FLAG: --vmodule="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.013056 4739 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.013069 4739 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013313 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013328 4739 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013340 4739 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013352 4739 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013362 4739 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013372 4739 feature_gate.go:330] unrecognized feature gate: 
VolumeGroupSnapshot Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013382 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013392 4739 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013403 4739 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013412 4739 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013422 4739 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013433 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013477 4739 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013488 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013498 4739 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013507 4739 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013517 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013527 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013537 4739 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013547 4739 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013556 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013565 4739 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013574 4739 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013584 4739 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013594 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013607 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013617 4739 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013630 4739 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013640 4739 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013650 4739 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013660 4739 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013670 4739 feature_gate.go:330] unrecognized feature gate: 
NetworkDiagnosticsConfig Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013680 4739 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013689 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013700 4739 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013710 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013720 4739 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013730 4739 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013740 4739 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013750 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013763 4739 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013773 4739 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013781 4739 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013789 4739 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013797 4739 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013806 4739 feature_gate.go:330] unrecognized feature gate: Example Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013817 4739 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013826 4739 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013836 4739 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013845 4739 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013856 4739 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013867 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013876 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013884 4739 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013895 4739 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013904 4739 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013912 4739 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013921 4739 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013929 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013937 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013945 4739 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013953 4739 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013960 4739 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013970 4739 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013977 4739 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013985 4739 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.013993 4739 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.014000 4739 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.014008 4739 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.014015 4739 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.014023 4739 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.014048 4739 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.029977 4739 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.030056 4739 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030310 4739 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030347 4739 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030361 4739 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030374 4739 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 
13:59:28.030387 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030399 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030412 4739 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030423 4739 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030433 4739 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030480 4739 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030492 4739 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030502 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030512 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030522 4739 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030533 4739 feature_gate.go:330] unrecognized feature gate: Example Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030542 4739 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030550 4739 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030558 4739 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030567 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030576 4739 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030585 4739 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030592 4739 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030600 4739 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030608 4739 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030616 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030624 4739 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030631 4739 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030641 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030652 4739 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030662 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030673 4739 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030688 4739 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030699 4739 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030711 4739 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030719 4739 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030729 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030737 4739 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030746 4739 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030754 4739 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030763 4739 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030772 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030780 4739 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030788 4739 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030798 4739 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030808 4739 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030817 4739 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030825 4739 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030835 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030844 4739 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030852 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030860 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030871 4739 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030883 4739 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030893 4739 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030903 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030913 4739 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030924 4739 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030934 4739 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030944 4739 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030954 4739 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030964 4739 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030976 4739 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.030987 4739 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031030 4739 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031041 4739 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031051 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031062 4739 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031072 4739 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031081 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031091 4739 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 
13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031100 4739 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.031117 4739 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031486 4739 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031512 4739 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031522 4739 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031536 4739 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031552 4739 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031564 4739 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031575 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031585 4739 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031595 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031605 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031615 4739 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031624 4739 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031634 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031646 4739 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031656 4739 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031666 4739 feature_gate.go:330] unrecognized feature gate: Example Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031675 4739 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031687 4739 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031696 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031733 4739 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031744 4739 feature_gate.go:330] unrecognized feature gate: 
NetworkLiveMigration Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031753 4739 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031766 4739 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031779 4739 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031792 4739 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031803 4739 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031813 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031825 4739 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031836 4739 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031846 4739 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031856 4739 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031867 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031876 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031886 4739 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031895 4739 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031905 4739 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031915 4739 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031924 4739 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031934 4739 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031945 4739 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031955 4739 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031966 4739 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031976 4739 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031986 4739 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.031995 4739 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032005 4739 
feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032016 4739 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032026 4739 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032035 4739 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032045 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032055 4739 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032066 4739 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032076 4739 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032085 4739 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032096 4739 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032105 4739 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032116 4739 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032127 4739 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032137 4739 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032148 4739 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032158 4739 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032166 4739 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032174 4739 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032185 4739 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032192 4739 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032200 4739 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032208 4739 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032216 4739 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032224 4739 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032232 4739 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.032240 4739 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.032252 4739 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.037106 4739 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.053709 4739 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.053873 4739 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.074346 4739 server.go:997] "Starting client certificate rotation"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.074411 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.074610 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-23 01:32:44.06229505 +0000 UTC
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.074693 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.213506 4739 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.216976 4739 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.217272 4739 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.238174 4739 log.go:25] "Validated CRI v1 runtime API"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.277809 4739 log.go:25] "Validated CRI v1 image API"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.280345 4739 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.286495 4739 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-18-13-54-48-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.286536 4739 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.308648 4739 manager.go:217] Machine: {Timestamp:2026-02-18 13:59:28.305111311 +0000 UTC m=+0.800832283 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:d786f2bd-7712-4d82-a689-cbffdaab4e85 BootID:90b9be3f-f663-4169-ae17-5b48d37fe9e4 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:18:7d:56 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:18:7d:56 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:37:23:03 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:82:a1:66 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:aa:7e:f5 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:ce:fd:d8 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:c6:33:01:0d:97:a1 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:b6:dc:09:a3:cb:e2 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.309023 4739 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.309212 4739 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.310913 4739 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.311179 4739 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.311237 4739 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.311658 4739 topology_manager.go:138] "Creating topology manager with none policy"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.311678 4739 container_manager_linux.go:303] "Creating device plugin manager"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.312168 4739 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.312216 4739 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.313031 4739 state_mem.go:36] "Initialized new in-memory state store"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.313188 4739 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.317914 4739 kubelet.go:418] "Attempting to sync node with API server"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.317971 4739 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.318032 4739 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.318050 4739 kubelet.go:324] "Adding apiserver pod source"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.318065 4739 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.322625 4739 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.323482 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.323487 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused
Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.323705 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError"
Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.323727 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.323889 4739 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.327739 4739 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.329926 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.329965 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.329979 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.329993 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.330015 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.330028 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.330040 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.330062 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.330076 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.330092 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.330135 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.330148 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.331642 4739 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.332591 4739 server.go:1280] "Started kubelet"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.333606 4739 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.333620 4739 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.334202 4739 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.334363 4739 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 18 13:59:28 crc systemd[1]: Started Kubernetes Kubelet.
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.340544 4739 server.go:460] "Adding debug handlers to kubelet server"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.342603 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.344752 4739 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.345431 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 19:33:20.828108439 +0000 UTC
Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.345811 4739 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.346033 4739 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.346084 4739 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.346268 4739 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.347420 4739 factory.go:55] Registering systemd factory
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.347475 4739 factory.go:221] Registration of the systemd container factory successfully
Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.345591 4739 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18955bfc775648fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 13:59:28.332187902 +0000 UTC m=+0.827908864,LastTimestamp:2026-02-18 13:59:28.332187902 +0000 UTC m=+0.827908864,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.347922 4739 factory.go:153] Registering CRI-O factory
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.347941 4739 factory.go:221] Registration of the crio container factory successfully
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.348036 4739 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.348069 4739 factory.go:103] Registering Raw factory
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.348095 4739 manager.go:1196] Started watching for new ooms in manager
Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.348197 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused
Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.348280 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.348876 4739 manager.go:319] Starting recovery of all containers
Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.346328 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="200ms"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357049 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357113 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357136 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357154 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357174 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357193 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357211 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357229 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357249 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357269 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357290 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357310 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357331 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357353 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357373 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357394 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357413 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357433 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357561 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357582 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357601 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357639 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357659 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357680 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357702 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357723 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357747 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357770 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357790 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357809 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357828 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357848 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357867 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357886 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357905 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357923 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357943 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357963 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.357983 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358003 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358022 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358043 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358062 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358083 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358103 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358123 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358143 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358162 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358182 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358201 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358231 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358252 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358280 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358305 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358328 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358350 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358371 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358392 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358410 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358430 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358477 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358497 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358518 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358538 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358559 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358580 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358599 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358617 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358637 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358657 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358676 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358695 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358715 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358736 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358755 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358775 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358794 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358813 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358834 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358854 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358873 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358892 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358919 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358937 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358956 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358976 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.358995 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359013 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359032 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359051 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359071 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359091 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359110 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359129 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359149 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359170 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359190 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359208 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359227 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359247 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359266 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.359286 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363367 4739 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363397 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363414 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363435 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363468 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363481 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363494 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363509 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363521 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363535 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363568 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363590 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363604 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363616 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363628 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363641 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363652 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363663 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363675 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363686 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363698 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363710 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363732 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363745 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363756 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363769 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363781 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363793 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363806 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363819 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363830 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363842 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363855 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363867 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363879 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363891 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363903 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363915 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363927 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363940 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363952 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363964 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363976 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.363987 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364000 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364022 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364034 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364046 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364057 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364069 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364080 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364091 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364105 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364119 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364131 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364142 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364154 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364167 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364179 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364191 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364203 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364214 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364225 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364286 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364298 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364310 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364325 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364337 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364349 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364360 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364371 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364383 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364393 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364405 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364416 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364427 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364438 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364465 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364476 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364486 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364498 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364695 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364707 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364718 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364731 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364742 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364754 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364765 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364776 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364789 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364800 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364811 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364822 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364833 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364844 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364855 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364867 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364878 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364891 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364902 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364912 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364923 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364935 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364946 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364957 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364969 4739 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364980 4739 reconstruct.go:97] "Volume reconstruction finished" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.364988 4739 reconciler.go:26] "Reconciler: start to sync state" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.384184 4739 manager.go:324] Recovery completed Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.399126 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.400551 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.400598 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.400615 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.401778 4739 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.401797 4739 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.401818 4739 state_mem.go:36] "Initialized new in-memory state store" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.407196 4739 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.409027 4739 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.409074 4739 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.409107 4739 kubelet.go:2335] "Starting kubelet main sync loop" Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.409156 4739 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 18 13:59:28 crc kubenswrapper[4739]: W0218 13:59:28.409690 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.409774 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.419093 4739 policy_none.go:49] "None policy: Start" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.420421 4739 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.420498 4739 state_mem.go:35] "Initializing new in-memory state store" Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.446542 4739 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.473110 4739 manager.go:334] "Starting Device Plugin manager" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.473209 4739 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.473227 4739 server.go:79] "Starting device plugin registration server" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.473771 4739 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.473795 4739 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.474177 4739 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.474285 4739 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.474302 4739 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.483186 4739 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.510066 4739 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 18 13:59:28 crc kubenswrapper[4739]: 
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.510187 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.511511 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.511555 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.511569 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.511724 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.512185 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.512250 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.512681 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.512712 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.512723 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.512833 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.512981 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.513052 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.513834 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.513893 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.513920 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.513838 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.513989 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.514003 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.514066 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.514337 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.514384 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.514803 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.514828 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.514840 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.515587 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.515610 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.515649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.515666 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.515618 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.515739 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.515863 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.516017 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.516089 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.516705 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.516782 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.516807 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.517149 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.517215 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.517238 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.517249 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.517217 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.519045 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.519103 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.519127 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.549690 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="400ms"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.566664 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.566702 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.566724 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.566763 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.566809 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.566887 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.566911 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.566934 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.566955 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.566976 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.566997 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.567017 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.567038 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.567087 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.567109 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.574709 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.575703 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.575731 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.575741 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.575761 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.576100 4739 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669095 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669390 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669517 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669597 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669671 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669702 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669723 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669744 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669765 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669785 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669805 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669825 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669844 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669867 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.669885 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670028 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670091 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670195 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670237 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670264 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670291 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670304 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670279 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670248 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670307 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670087 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670357 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670200 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670280 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.670646 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.776264 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.778235 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.778295 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.778318 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.778361 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.778930 4739 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.845643 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.863173 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.871684 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.891616 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: I0218 13:59:28.900193 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 13:59:28 crc kubenswrapper[4739]: E0218 13:59:28.951677 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="800ms" Feb 18 13:59:29 crc kubenswrapper[4739]: W0218 13:59:29.029053 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-181c50411a4a02654fb2be76624f023c3fb982f5568934db55f9cb48f65482ef WatchSource:0}: Error finding container 181c50411a4a02654fb2be76624f023c3fb982f5568934db55f9cb48f65482ef: Status 404 returned error can't find the container with id 181c50411a4a02654fb2be76624f023c3fb982f5568934db55f9cb48f65482ef Feb 18 13:59:29 crc kubenswrapper[4739]: W0218 13:59:29.043553 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-0ea15d579cf084726c893946b2ac4a200346b512325791c9c192e647374da277 WatchSource:0}: Error finding container 0ea15d579cf084726c893946b2ac4a200346b512325791c9c192e647374da277: Status 404 returned error can't find the container with id 0ea15d579cf084726c893946b2ac4a200346b512325791c9c192e647374da277 Feb 18 13:59:29 crc kubenswrapper[4739]: W0218 13:59:29.046554 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-ea43acf2ee2d50d21b0de9a779908635ddbd10b93b78d4200a169b41893d0e22 WatchSource:0}: Error finding container ea43acf2ee2d50d21b0de9a779908635ddbd10b93b78d4200a169b41893d0e22: Status 404 returned error can't find the container with id ea43acf2ee2d50d21b0de9a779908635ddbd10b93b78d4200a169b41893d0e22 Feb 18 13:59:29 crc kubenswrapper[4739]: W0218 13:59:29.049587 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-c4833d54dcb4d3996c8ce252ebab0796b3efe1a383e0cbfd77132e6dfbf0e032 WatchSource:0}: Error finding container c4833d54dcb4d3996c8ce252ebab0796b3efe1a383e0cbfd77132e6dfbf0e032: Status 404 returned error can't find the container with id c4833d54dcb4d3996c8ce252ebab0796b3efe1a383e0cbfd77132e6dfbf0e032 Feb 18 13:59:29 crc kubenswrapper[4739]: W0218 13:59:29.057013 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-20ac3626e41e08d4a05e641e31454596237ebfe83aa9ce34fb19b5734377ca4e WatchSource:0}: Error finding container 20ac3626e41e08d4a05e641e31454596237ebfe83aa9ce34fb19b5734377ca4e: Status 404 returned error can't find the container with id 20ac3626e41e08d4a05e641e31454596237ebfe83aa9ce34fb19b5734377ca4e Feb 18 13:59:29 crc kubenswrapper[4739]: W0218 13:59:29.138429 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 13:59:29 crc kubenswrapper[4739]: E0218 13:59:29.138605 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.180050 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.181334 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.181366 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.181374 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.181394 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 13:59:29 crc kubenswrapper[4739]: E0218 13:59:29.181846 4739 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Feb 18 13:59:29 crc kubenswrapper[4739]: W0218 13:59:29.187557 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 13:59:29 crc kubenswrapper[4739]: E0218 13:59:29.187648 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.335665 4739 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.346696 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:52:48.334024991 +0000 UTC Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.413683 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c4833d54dcb4d3996c8ce252ebab0796b3efe1a383e0cbfd77132e6dfbf0e032"} Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.414532 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"20ac3626e41e08d4a05e641e31454596237ebfe83aa9ce34fb19b5734377ca4e"} Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.415466 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ea43acf2ee2d50d21b0de9a779908635ddbd10b93b78d4200a169b41893d0e22"} Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.416720 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"0ea15d579cf084726c893946b2ac4a200346b512325791c9c192e647374da277"} Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.417599 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"181c50411a4a02654fb2be76624f023c3fb982f5568934db55f9cb48f65482ef"} Feb 18 13:59:29 crc kubenswrapper[4739]: W0218 13:59:29.455731 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 13:59:29 crc kubenswrapper[4739]: E0218 13:59:29.455829 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 13:59:29 crc kubenswrapper[4739]: W0218 13:59:29.586654 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 13:59:29 crc kubenswrapper[4739]: E0218 13:59:29.586748 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 13:59:29 crc kubenswrapper[4739]: E0218 13:59:29.752689 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="1.6s" Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.982643 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.984749 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.984804 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.984822 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 
13:59:29 crc kubenswrapper[4739]: I0218 13:59:29.984857 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 13:59:29 crc kubenswrapper[4739]: E0218 13:59:29.985432 4739 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.334241 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 13:59:30 crc kubenswrapper[4739]: E0218 13:59:30.335215 4739 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.335252 4739 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.347814 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 19:53:39.705498647 +0000 UTC Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.424202 4739 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2" exitCode=0 Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.424363 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.424354 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2"} Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.425888 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.425943 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.425962 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.428264 4739 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328" exitCode=0 Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.428317 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328"} Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.428397 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:30 crc 
kubenswrapper[4739]: I0218 13:59:30.431584 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.431636 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.431654 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.435297 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c"} Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.435356 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa"} Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.435405 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366"} Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.438120 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a" exitCode=0 Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.438206 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a"} Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.438256 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.439638 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.439689 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.439714 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.441315 4739 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="eadc9da4d34341452973f7f10abd33b15c3e8f21b8a71878a055c77c9cbf043d" exitCode=0 Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.441379 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"eadc9da4d34341452973f7f10abd33b15c3e8f21b8a71878a055c77c9cbf043d"} Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.441495 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.443855 
4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.443899 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.443917 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.445400 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.448331 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.448393 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:30 crc kubenswrapper[4739]: I0218 13:59:30.448418 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:31 crc kubenswrapper[4739]: W0218 13:59:31.196368 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 13:59:31 crc kubenswrapper[4739]: E0218 13:59:31.196497 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.335178 4739 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.348353 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 18:55:57.882146073 +0000 UTC Feb 18 13:59:31 crc kubenswrapper[4739]: E0218 13:59:31.353798 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="3.2s" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.446646 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.446638 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"734348fbaddb1f1106c5f33316276e3e4b941e731084a8379fd9bcef39a5f687"} Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.447578 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.447614 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.447622 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.448494 4739 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5" exitCode=0 Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.448553 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5"} Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.448630 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.450216 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.450240 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.450249 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.453229 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"24204b574214fd132c4600c72d6efea99d8781e63feeb0ab418a3248413909f8"} Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.453274 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5"} Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.453289 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b"} Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.453309 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.453938 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.453961 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.453969 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.455210 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8"} Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.455277 4739 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.455912 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.455955 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.455965 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.457707 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59"} Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.457732 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990"} Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.457744 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e"} Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.457753 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc"} Feb 18 13:59:31 crc kubenswrapper[4739]: W0218 13:59:31.563219 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 13:59:31 crc kubenswrapper[4739]: E0218 13:59:31.563286 4739 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.586014 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.587115 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.587142 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.587155 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.587178 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 13:59:31 crc kubenswrapper[4739]: E0218 13:59:31.587579 4739 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.647897 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.658062 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 13:59:31 crc kubenswrapper[4739]: I0218 13:59:31.715911 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.349416 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 09:35:10.631636996 +0000 UTC Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.462731 4739 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85" exitCode=0 Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.462819 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85"} Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.462986 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.464311 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.464346 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.464362 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.469066 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.469217 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.469491 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.469601 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.469077 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8"} Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.471669 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.474240 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 
13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.474295 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.474298 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.474317 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.474343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.474366 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.474528 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.474569 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.474590 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.476156 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.476223 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.476240 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:32 crc kubenswrapper[4739]: I0218 13:59:32.788720 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.349772 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 14:51:11.172308205 +0000 UTC Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.477564 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.478240 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28"} Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.478290 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b"} Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.478310 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71"} Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.478325 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b"} Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.478428 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.478467 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.478512 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.478529 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.478868 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.478908 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.478923 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.479605 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.479659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.479683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.479781 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.479802 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.479817 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:33 crc kubenswrapper[4739]: I0218 13:59:33.544320 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.350067 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 06:34:12.212399689 +0000 UTC Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.416023 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.438115 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.488799 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.488889 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.489658 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9"} Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.490549 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.490618 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.490642 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.490777 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.490817 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.490839 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.788289 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.790103 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.790186 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.790257 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:34 crc kubenswrapper[4739]: I0218 13:59:34.790303 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 13:59:35 crc kubenswrapper[4739]: I0218 13:59:35.351808 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 08:31:36.428735494 +0000 UTC Feb 18 13:59:35 crc kubenswrapper[4739]: I0218 13:59:35.491498 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:35 crc kubenswrapper[4739]: I0218 13:59:35.491522 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:35 crc kubenswrapper[4739]: I0218 13:59:35.492705 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:35 crc kubenswrapper[4739]: I0218 13:59:35.492744 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:35 crc kubenswrapper[4739]: I0218 13:59:35.492757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:35 crc kubenswrapper[4739]: I0218 13:59:35.493010 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:35 crc kubenswrapper[4739]: I0218 13:59:35.493064 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:35 crc kubenswrapper[4739]: I0218 13:59:35.493082 4739 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:36 crc kubenswrapper[4739]: I0218 13:59:36.352752 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 08:04:26.685149407 +0000 UTC Feb 18 13:59:37 crc kubenswrapper[4739]: I0218 13:59:37.353642 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 18:32:15.765564484 +0000 UTC Feb 18 13:59:37 crc kubenswrapper[4739]: I0218 13:59:37.721699 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 13:59:37 crc kubenswrapper[4739]: I0218 13:59:37.721907 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:37 crc kubenswrapper[4739]: I0218 13:59:37.724476 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:37 crc kubenswrapper[4739]: I0218 13:59:37.724525 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:37 crc kubenswrapper[4739]: I0218 13:59:37.724540 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:38 crc kubenswrapper[4739]: I0218 13:59:38.000701 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 13:59:38 crc kubenswrapper[4739]: I0218 13:59:38.354381 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 00:18:25.834287587 +0000 UTC Feb 18 13:59:38 crc kubenswrapper[4739]: E0218 13:59:38.483275 4739 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 13:59:38 crc kubenswrapper[4739]: I0218 13:59:38.500134 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:38 crc kubenswrapper[4739]: I0218 13:59:38.501301 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:38 crc kubenswrapper[4739]: I0218 13:59:38.501352 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:38 crc kubenswrapper[4739]: I0218 13:59:38.501364 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:39 crc kubenswrapper[4739]: I0218 13:59:39.354562 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 21:43:26.023243226 +0000 UTC Feb 18 13:59:39 crc kubenswrapper[4739]: I0218 13:59:39.383793 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 18 13:59:39 crc kubenswrapper[4739]: I0218 13:59:39.384108 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:39 crc kubenswrapper[4739]: I0218 13:59:39.385652 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 
13:59:39 crc kubenswrapper[4739]: I0218 13:59:39.385680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:39 crc kubenswrapper[4739]: I0218 13:59:39.385689 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:40 crc kubenswrapper[4739]: I0218 13:59:40.355279 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 06:49:37.419190428 +0000 UTC Feb 18 13:59:40 crc kubenswrapper[4739]: I0218 13:59:40.779973 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 18 13:59:40 crc kubenswrapper[4739]: I0218 13:59:40.780207 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:40 crc kubenswrapper[4739]: I0218 13:59:40.781691 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:40 crc kubenswrapper[4739]: I0218 13:59:40.781730 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:40 crc kubenswrapper[4739]: I0218 13:59:40.781742 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:41 crc kubenswrapper[4739]: I0218 13:59:41.001716 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 13:59:41 crc kubenswrapper[4739]: I0218 13:59:41.001807 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 13:59:41 crc kubenswrapper[4739]: I0218 13:59:41.355953 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 02:14:13.275266949 +0000 UTC Feb 18 13:59:42 crc kubenswrapper[4739]: W0218 13:59:42.097329 4739 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 18 13:59:42 crc kubenswrapper[4739]: I0218 13:59:42.097501 4739 trace.go:236] Trace[1638822830]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 13:59:32.095) (total time: 10001ms): Feb 18 13:59:42 crc kubenswrapper[4739]: Trace[1638822830]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:59:42.097) Feb 18 13:59:42 crc kubenswrapper[4739]: Trace[1638822830]: [10.001458562s] [10.001458562s] END Feb 18 13:59:42 crc kubenswrapper[4739]: E0218 13:59:42.097541 4739 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 18 13:59:42 crc kubenswrapper[4739]: I0218 13:59:42.336227 4739 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 18 13:59:42 crc kubenswrapper[4739]: I0218 13:59:42.356697 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 10:37:36.90987619 +0000 UTC Feb 18 13:59:42 crc kubenswrapper[4739]: I0218 13:59:42.367517 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 18 13:59:42 crc kubenswrapper[4739]: I0218 13:59:42.367582 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 18 13:59:42 crc kubenswrapper[4739]: I0218 13:59:42.377758 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 18 13:59:42 crc kubenswrapper[4739]: I0218 13:59:42.377834 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 18 13:59:42 crc kubenswrapper[4739]: I0218 13:59:42.796294 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]log ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]etcd ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/generic-apiserver-start-informers ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 18 13:59:42 crc 
kubenswrapper[4739]: [+]poststarthook/priority-and-fairness-filter ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/start-apiextensions-informers ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/start-apiextensions-controllers ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/crd-informer-synced ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/start-system-namespaces-controller ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 18 13:59:42 crc kubenswrapper[4739]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 18 13:59:42 crc kubenswrapper[4739]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/bootstrap-controller ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/start-kube-aggregator-informers ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/apiservice-registration-controller ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/apiservice-discovery-controller ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]autoregister-completion ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/apiservice-openapi-controller ok Feb 18 13:59:42 crc kubenswrapper[4739]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 18 13:59:42 crc kubenswrapper[4739]: livez check failed Feb 18 13:59:42 crc kubenswrapper[4739]: I0218 13:59:42.796377 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 13:59:43 crc kubenswrapper[4739]: I0218 13:59:43.357557 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 04:18:34.184837643 +0000 UTC Feb 18 13:59:44 crc kubenswrapper[4739]: I0218 13:59:44.358131 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 21:32:47.810128731 +0000 UTC Feb 18 13:59:45 crc kubenswrapper[4739]: I0218 13:59:45.358499 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2025-11-19 04:37:32.81525346 +0000 UTC Feb 18 13:59:46 crc kubenswrapper[4739]: I0218 13:59:46.193354 4739 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 13:59:46 crc kubenswrapper[4739]: I0218 13:59:46.359332 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 12:48:53.358970116 +0000 UTC Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.359956 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 16:42:23.921281367 +0000 UTC Feb 18 13:59:47 crc kubenswrapper[4739]: E0218 13:59:47.361056 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.363733 4739 trace.go:236] Trace[1416376946]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 13:59:35.680) (total time: 11682ms): Feb 18 13:59:47 crc kubenswrapper[4739]: Trace[1416376946]: ---"Objects listed" error: 11682ms (13:59:47.363) Feb 18 13:59:47 crc kubenswrapper[4739]: Trace[1416376946]: [11.682726099s] [11.682726099s] END Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.363782 4739 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.363823 4739 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.363906 4739 trace.go:236] Trace[672147440]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 13:59:32.486) (total time: 14876ms): Feb 18 13:59:47 crc kubenswrapper[4739]: Trace[672147440]: ---"Objects listed" error: 14876ms (13:59:47.363) Feb 18 13:59:47 crc kubenswrapper[4739]: Trace[672147440]: [14.876959033s] [14.876959033s] END Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.363937 4739 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.364580 4739 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 18 13:59:47 crc kubenswrapper[4739]: E0218 13:59:47.367323 4739 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.369440 4739 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.587591 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54350->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.587654 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54350->192.168.126.11:17697: read: connection reset by peer" Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.587904 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54358->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.587972 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54358->192.168.126.11:17697: read: connection reset by peer" Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.726212 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.795656 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.796331 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.796468 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 18 13:59:47 crc kubenswrapper[4739]: I0218 13:59:47.800191 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.242895 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.331912 4739 apiserver.go:52] "Watching apiserver" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.334510 4739 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.334799 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.335141 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.335170 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.335207 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.335676 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.335776 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.335912 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.336376 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.336494 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.336615 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.337331 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.337747 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.337861 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.338064 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.338102 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.339042 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.339144 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.339169 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.340044 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.347656 4739 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.360722 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 03:56:35.297977626 +0000 UTC Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372091 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372140 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372166 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372191 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372217 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372240 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372263 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372287 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372312 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372346 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372376 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372404 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372435 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372485 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372516 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372545 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372588 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372623 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372658 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372687 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372717 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372749 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372781 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372812 4739 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372844 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372874 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372907 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372938 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.372997 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373326 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373366 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373400 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373439 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373491 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373521 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373552 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373581 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373612 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373660 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373696 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373686 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373730 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373762 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373892 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.373941 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374086 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374370 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374459 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374546 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374543 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374566 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374585 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374612 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374631 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374649 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374666 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374685 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374707 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374728 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374747 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374765 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374782 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374798 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374814 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374832 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374856 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374872 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374888 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374930 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374950 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374970 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374994 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375021 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375042 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375066 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375089 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" 
(UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375111 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375133 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375158 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375212 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375213 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375237 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375398 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375424 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375466 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375490 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375512 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375534 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375560 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375587 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375607 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375628 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375651 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375671 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375690 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375710 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375732 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375755 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375775 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375799 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375824 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375845 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375865 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375890 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375911 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375932 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375953 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375973 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.376002 4739 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.376022 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377007 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377045 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377070 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377097 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377120 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377143 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377168 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377190 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 
13:59:48.377214 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377247 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377269 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377294 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377318 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377340 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377367 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377391 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377413 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377435 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 13:59:48 crc kubenswrapper[4739]: 
I0218 13:59:48.377474 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377500 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377525 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377548 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377573 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377595 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377618 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377642 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377665 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377690 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 
13:59:48.377715 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377737 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377765 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377789 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377814 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377839 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377863 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377886 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377909 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378005 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 
18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378033 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378059 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378083 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378109 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378136 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378161 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374597 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374624 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374959 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374977 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.374974 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375013 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375131 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375387 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375419 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375518 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375573 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375617 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375777 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.375839 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.376045 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.376085 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.376332 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.376388 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.376695 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377061 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377134 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377234 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377245 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377468 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377654 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377659 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378565 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377678 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378611 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.377811 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378169 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378178 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378190 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). 
InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378665 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378892 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.379023 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.379049 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.379151 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.379348 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.379350 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.379547 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). 
InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.379763 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.380067 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.380118 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.380521 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.380544 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.380676 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.380692 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.381604 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.381637 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.381654 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.381656 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.381626 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.381701 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.381755 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.381891 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). 
InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.381956 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.381954 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.382292 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.382666 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.382702 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.382886 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.383115 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.383247 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.383548 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.383699 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.383709 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.383774 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.383832 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 13:59:48.883812188 +0000 UTC m=+21.379533130 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.378186 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384599 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384629 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384652 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384675 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384697 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384717 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384741 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384763 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384787 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384808 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384831 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384852 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384861 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384877 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384904 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384947 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.384990 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385027 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385179 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385207 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385247 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385272 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385304 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385321 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385336 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385412 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385438 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385472 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385478 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385547 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385565 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385574 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385588 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385586 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385754 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385783 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385787 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385810 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385837 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385860 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385816 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385877 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.385886 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386009 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386022 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386057 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386097 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386161 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386198 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386231 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386268 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386304 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" 
(UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386338 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386372 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386409 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386467 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386505 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386539 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386573 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386608 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.387025 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 13:59:48 crc kubenswrapper[4739]: 
I0218 13:59:48.387125 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.387679 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.387721 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.387775 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.387812 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.387854 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.387893 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.387932 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.387971 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" 
(UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.388005 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.388040 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.388081 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.388132 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.388682 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390082 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390113 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390133 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390153 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390193 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390213 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390232 4739 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390250 4739 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390270 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390290 4739 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390308 4739 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390327 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390346 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390364 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390383 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390403 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390421 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390440 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: 
\"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390485 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390506 4739 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390525 4739 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390544 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390563 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390581 4739 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390601 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390621 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390639 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390658 4739 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390677 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390695 4739 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390714 4739 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390733 4739 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390753 4739 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390774 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390792 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390811 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390830 4739 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390852 4739 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390871 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390890 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390908 4739 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390927 4739 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390945 4739 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390963 4739 reconciler_common.go:293] "Volume detached for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390985 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391003 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391021 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391039 4739 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391058 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391077 4739 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391096 4739 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391115 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391170 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391190 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391209 4739 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391229 4739 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391248 4739 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391269 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391287 4739 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391306 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391326 4739 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391346 4739 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391365 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391384 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391402 4739 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391420 4739 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391467 4739 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391487 4739 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391506 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391525 4739 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391546 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391565 4739 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391584 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391601 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391621 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391640 4739 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391658 4739 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391677 4739 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391696 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391714 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391734 4739 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391755 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 
13:59:48.391774 4739 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391796 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391815 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391832 4739 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391852 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391871 4739 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386163 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386187 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386238 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.393248 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386251 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386275 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386420 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386454 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386615 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386638 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386650 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386655 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386808 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.387050 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386561 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.386992 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.387562 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.387795 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.388129 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.388226 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.388339 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.388496 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.388554 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.388654 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.389079 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.389184 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.389570 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.389627 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390026 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390189 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390152 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390267 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390274 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390502 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390520 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390597 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.390614 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391100 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391149 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391164 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391174 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391198 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391236 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391243 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391035 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.391976 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.392050 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.392117 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.392141 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.392363 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.392439 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.392506 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.392536 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.392621 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.392870 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.393327 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.393988 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.396227 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.396478 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.396900 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.396633 4739 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.397908 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.397981 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.398033 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.398097 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.398147 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-18 13:59:48.898130933 +0000 UTC m=+21.393851855 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.398238 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.398357 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:48.898323188 +0000 UTC m=+21.394044320 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.398585 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.398973 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.402111 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.402310 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.402507 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.406539 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.406769 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.407018 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.407688 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.407935 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.408395 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.408705 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.408750 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.408780 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.408827 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.409183 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.409213 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.409396 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.409483 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.412922 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.413514 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.413675 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.414277 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.414387 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.414410 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.414423 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.414516 
4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:48.914491158 +0000 UTC m=+21.410212160 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.414601 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.414746 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.416001 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.416693 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.416726 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.416743 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.416582 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.416803 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:48.916781323 +0000 UTC m=+21.412502255 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.417236 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.417365 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.417713 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.418338 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.418733 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.418816 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.418842 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.418861 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.419212 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.419357 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.419798 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.420143 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.420392 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.420912 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.421172 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.421492 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.422791 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.423400 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.423980 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.425313 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.425884 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.426707 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.426863 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.427479 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.429145 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.429217 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.430293 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.430608 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.430707 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.431140 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.431199 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.431236 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.431646 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.431990 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.432612 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.433546 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.433722 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.435810 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.437277 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.438711 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.438881 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.439315 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.440764 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.441423 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.444098 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.445371 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.446306 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.447997 4739 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.449319 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.450319 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.451437 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.452336 4739 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.452517 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.455358 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.456102 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.456775 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.457401 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.459372 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.460926 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.460978 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.461311 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.462036 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.464170 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.465247 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.465772 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.466895 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.467564 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.468680 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.469207 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.470215 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 18 13:59:48 crc 
kubenswrapper[4739]: I0218 13:59:48.470412 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.470842 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.472693 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.473268 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.474225 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.474699 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.475768 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.476330 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.476795 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.488477 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.494794 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.494906 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495019 4739 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495041 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495059 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495081 4739 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495099 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495115 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495131 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 
13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495147 4739 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495164 4739 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495180 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495196 4739 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495213 4739 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495229 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495245 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495262 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495277 4739 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495291 4739 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495307 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495323 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495338 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 18 
13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495353 4739 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495368 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495384 4739 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495401 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495416 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495432 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495469 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495484 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495498 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495513 4739 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495528 4739 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495545 4739 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495560 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" 
DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495577 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495592 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495605 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495621 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495638 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495654 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495668 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495683 4739 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495698 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495713 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495728 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495743 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495759 4739 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495775 4739 reconciler_common.go:293] "Volume detached 
for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495791 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495806 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495821 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495835 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495851 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495867 4739 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495882 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495897 4739 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495912 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495927 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495943 4739 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495959 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495975 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" 
(UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.495990 4739 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496004 4739 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496027 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496043 4739 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496058 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496073 4739 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496088 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496103 4739 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496118 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496134 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496149 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496164 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496179 4739 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496195 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496212 4739 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496239 4739 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496255 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496271 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496286 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496302 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496317 4739 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496332 4739 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496348 4739 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496363 4739 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496377 4739 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496391 4739 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node 
\"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496405 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496420 4739 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496435 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496469 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496485 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496500 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496515 4739 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496532 4739 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496553 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496571 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496588 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496604 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496620 4739 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496635 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496650 4739 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496666 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496684 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496700 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496716 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.496731 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.497759 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.499228 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.501592 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.518141 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.528604 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.529906 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.530672 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8" exitCode=255 Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.530749 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8"} Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.535189 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.536747 4739 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.536788 4739 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.536994 4739 scope.go:117] "RemoveContainer" containerID="8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.540463 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.550170 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.563859 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.578316 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.589389 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",
\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.598198 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.608351 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18
T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.621029 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.631633 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.644609 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.664278 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.664506 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.665646 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.673159 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.685490 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.707097 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.724053 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.901100 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.901203 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:48 crc kubenswrapper[4739]: I0218 13:59:48.901259 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.901403 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.901495 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.901527 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-18 13:59:49.901506898 +0000 UTC m=+22.397227830 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.901551 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:49.901538339 +0000 UTC m=+22.397259261 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 13:59:48 crc kubenswrapper[4739]: E0218 13:59:48.901570 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 13:59:49.90156001 +0000 UTC m=+22.397280932 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.002572 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.002629 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 13:59:49 crc kubenswrapper[4739]: E0218 13:59:49.002765 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 13:59:49 crc kubenswrapper[4739]: E0218 13:59:49.002785 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 13:59:49 crc kubenswrapper[4739]: E0218 13:59:49.002797 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:49 crc kubenswrapper[4739]: E0218 13:59:49.002834 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 13:59:49 crc kubenswrapper[4739]: E0218 13:59:49.002875 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 13:59:49 crc kubenswrapper[4739]: E0218 13:59:49.002892 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:49 crc kubenswrapper[4739]: E0218 13:59:49.002853 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:50.002837001 +0000 UTC m=+22.498557923 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:49 crc kubenswrapper[4739]: E0218 13:59:49.002989 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:50.002956364 +0000 UTC m=+22.498677476 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.360831 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 10:49:39.410509504 +0000 UTC Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.534214 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"916e8f95206be7dd9856b3f6fe2498277be5c1b911a349bfcbfef0acce91881c"} Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.535755 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024"} Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.535792 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"273f18efd8e25c48124c4936031339dd4aeff5030e9a2a2a97203bf534b02802"} Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.537680 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb"} Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.537838 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137"} Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.537930 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"99d2370b0ab8bca0dbc31de2ec404ccc2969b9becd4a7f878ce9d6eca641de44"} Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.540412 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.542705 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db"} Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.558485 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.569960 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.579516 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.601254 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.616022 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.631532 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18
T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.642722 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.654574 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.664690 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.676530 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.688230 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.701617 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.716105 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.726084 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.735054 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.744037 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.912792 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.912874 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:49 crc kubenswrapper[4739]: I0218 13:59:49.912915 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:49 crc kubenswrapper[4739]: 
E0218 13:59:49.913022 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 13:59:49 crc kubenswrapper[4739]: E0218 13:59:49.913072 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 13:59:51.913034273 +0000 UTC m=+24.408755235 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 13:59:49 crc kubenswrapper[4739]: E0218 13:59:49.913125 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:51.913105924 +0000 UTC m=+24.408826886 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 13:59:49 crc kubenswrapper[4739]: E0218 13:59:49.913134 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 13:59:49 crc kubenswrapper[4739]: E0218 13:59:49.913242 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:51.913215387 +0000 UTC m=+24.408936349 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.014040 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.014120 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 13:59:50 crc kubenswrapper[4739]: E0218 13:59:50.014227 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 13:59:50 crc kubenswrapper[4739]: E0218 13:59:50.014235 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 13:59:50 crc kubenswrapper[4739]: E0218 13:59:50.014281 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 13:59:50 crc kubenswrapper[4739]: E0218 13:59:50.014242 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 13:59:50 crc kubenswrapper[4739]: E0218 13:59:50.014296 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:50 crc kubenswrapper[4739]: E0218 13:59:50.014309 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:50 crc kubenswrapper[4739]: E0218 13:59:50.014360 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:52.014346745 +0000 UTC m=+24.510067667 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:50 crc kubenswrapper[4739]: E0218 13:59:50.014374 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:52.014368376 +0000 UTC m=+24.510089298 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.361784 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 20:01:45.614718423 +0000 UTC Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.410500 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.410768 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 13:59:50 crc kubenswrapper[4739]: E0218 13:59:50.410941 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 13:59:50 crc kubenswrapper[4739]: E0218 13:59:50.410767 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.410574 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 13:59:50 crc kubenswrapper[4739]: E0218 13:59:50.411071 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.421827 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.422554 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.424082 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.424770 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.426031 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.426580 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.545615 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.831667 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.849318 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.851284 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.854020 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:50Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.869004 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:50Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.884424 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:50Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.898700 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:50Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.917894 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:50Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.932415 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:50Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.944280 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:50Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.955658 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:50Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.971704 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:50Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:50 crc kubenswrapper[4739]: I0218 13:59:50.986672 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:50Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.000319 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:50Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.016882 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.030273 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.050784 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-m
etrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.074041 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.088181 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.101778 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.362260 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 17:20:34.706136579 +0000 UTC Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.550125 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978"} Feb 18 13:59:51 crc kubenswrapper[4739]: E0218 13:59:51.559588 4739 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.575217 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.625243 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.639236 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.651790 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.662678 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.674505 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.686486 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.699288 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.711759 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:51Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.928915 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.929003 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:51 crc kubenswrapper[4739]: I0218 13:59:51.929060 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:51 crc kubenswrapper[4739]: E0218 13:59:51.929097 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 13:59:55.929066124 +0000 UTC m=+28.424787056 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 13:59:51 crc kubenswrapper[4739]: E0218 13:59:51.929207 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 13:59:51 crc kubenswrapper[4739]: E0218 13:59:51.929225 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 13:59:51 crc kubenswrapper[4739]: E0218 13:59:51.929272 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:55.929252948 +0000 UTC m=+28.424973920 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 13:59:51 crc kubenswrapper[4739]: E0218 13:59:51.929294 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:55.929285239 +0000 UTC m=+28.425006291 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 13:59:52 crc kubenswrapper[4739]: I0218 13:59:52.030283 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 13:59:52 crc kubenswrapper[4739]: I0218 13:59:52.030400 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 13:59:52 crc kubenswrapper[4739]: E0218 13:59:52.030525 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 13:59:52 crc kubenswrapper[4739]: E0218 13:59:52.030574 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 13:59:52 crc kubenswrapper[4739]: E0218 13:59:52.030598 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:52 crc kubenswrapper[4739]: E0218 13:59:52.030601 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 13:59:52 crc kubenswrapper[4739]: E0218 13:59:52.030632 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 13:59:52 crc kubenswrapper[4739]: E0218 13:59:52.030653 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:52 crc kubenswrapper[4739]: E0218 13:59:52.030690 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:56.030663273 +0000 UTC m=+28.526384235 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:52 crc kubenswrapper[4739]: E0218 13:59:52.030726 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 13:59:56.030708084 +0000 UTC m=+28.526429046 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:52 crc kubenswrapper[4739]: I0218 13:59:52.362380 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 04:40:47.029461354 +0000 UTC Feb 18 13:59:52 crc kubenswrapper[4739]: I0218 13:59:52.410052 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:52 crc kubenswrapper[4739]: I0218 13:59:52.410114 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 13:59:52 crc kubenswrapper[4739]: I0218 13:59:52.410189 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 13:59:52 crc kubenswrapper[4739]: E0218 13:59:52.410314 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 13:59:52 crc kubenswrapper[4739]: E0218 13:59:52.410416 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 13:59:52 crc kubenswrapper[4739]: E0218 13:59:52.410519 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.362738 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 19:04:36.04391963 +0000 UTC Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.767493 4739 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.769763 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.769817 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.769834 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.769913 4739 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.796689 4739 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.796792 4739 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.797908 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.797937 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.797948 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.797964 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.797975 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:53Z","lastTransitionTime":"2026-02-18T13:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.823420 4739 csr.go:261] certificate signing request csr-w9vpp is approved, waiting to be issued Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.845046 4739 csr.go:257] certificate signing request csr-w9vpp is issued Feb 18 13:59:53 crc kubenswrapper[4739]: E0218 13:59:53.890501 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:53Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.895201 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.895236 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.895247 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.895263 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.895275 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:53Z","lastTransitionTime":"2026-02-18T13:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:53 crc kubenswrapper[4739]: E0218 13:59:53.910021 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:53Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.910578 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-mdk59"] Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.910873 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-mdk59" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.913420 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.913755 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.917330 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.917434 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.917483 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.917493 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.917510 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.917521 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:53Z","lastTransitionTime":"2026-02-18T13:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.929786 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:53Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:53 crc kubenswrapper[4739]: E0218 13:59:53.936122 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:53Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.939744 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.939795 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.939807 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.939824 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.939836 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:53Z","lastTransitionTime":"2026-02-18T13:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:53 crc kubenswrapper[4739]: E0218 13:59:53.954767 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:53Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.961471 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.961518 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.961530 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.961546 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.961558 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:53Z","lastTransitionTime":"2026-02-18T13:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.965198 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:53Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:53 crc kubenswrapper[4739]: E0218 13:59:53.974393 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:53Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:53 crc kubenswrapper[4739]: E0218 13:59:53.974605 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.975953 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.975980 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.975990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.976006 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.976019 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:53Z","lastTransitionTime":"2026-02-18T13:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:53 crc kubenswrapper[4739]: I0218 13:59:53.980820 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:53Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.002803 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.019145 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.033415 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.044876 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.047136 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ef364cd3-8b0e-4ebb-96a9-f660f4dd776a-hosts-file\") pod \"node-resolver-mdk59\" (UID: \"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\") " pod="openshift-dns/node-resolver-mdk59" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.047201 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6csts\" (UniqueName: \"kubernetes.io/projected/ef364cd3-8b0e-4ebb-96a9-f660f4dd776a-kube-api-access-6csts\") pod \"node-resolver-mdk59\" (UID: \"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\") " pod="openshift-dns/node-resolver-mdk59" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.056623 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.069502 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.077832 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.077880 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.077891 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.077907 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.077917 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:54Z","lastTransitionTime":"2026-02-18T13:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.085383 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.148016 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6csts\" (UniqueName: \"kubernetes.io/projected/ef364cd3-8b0e-4ebb-96a9-f660f4dd776a-kube-api-access-6csts\") pod \"node-resolver-mdk59\" (UID: \"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\") " pod="openshift-dns/node-resolver-mdk59" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.148063 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ef364cd3-8b0e-4ebb-96a9-f660f4dd776a-hosts-file\") pod \"node-resolver-mdk59\" (UID: \"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\") " pod="openshift-dns/node-resolver-mdk59" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.148127 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ef364cd3-8b0e-4ebb-96a9-f660f4dd776a-hosts-file\") pod \"node-resolver-mdk59\" (UID: \"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\") " 
pod="openshift-dns/node-resolver-mdk59" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.165342 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6csts\" (UniqueName: \"kubernetes.io/projected/ef364cd3-8b0e-4ebb-96a9-f660f4dd776a-kube-api-access-6csts\") pod \"node-resolver-mdk59\" (UID: \"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\") " pod="openshift-dns/node-resolver-mdk59" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.180410 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.180459 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.180467 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.180480 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.180490 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:54Z","lastTransitionTime":"2026-02-18T13:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.222546 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-mdk59" Feb 18 13:59:54 crc kubenswrapper[4739]: W0218 13:59:54.235865 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef364cd3_8b0e_4ebb_96a9_f660f4dd776a.slice/crio-0a6f4cabed43e26c586da8fdd4c7f4c8e5f03039f28fe82573bf502745b6785a WatchSource:0}: Error finding container 0a6f4cabed43e26c586da8fdd4c7f4c8e5f03039f28fe82573bf502745b6785a: Status 404 returned error can't find the container with id 0a6f4cabed43e26c586da8fdd4c7f4c8e5f03039f28fe82573bf502745b6785a Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.282756 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.282808 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.282819 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.282839 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.282849 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:54Z","lastTransitionTime":"2026-02-18T13:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.323054 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-h9slg"] Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.323952 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.326574 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.326951 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.330004 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.330400 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.333728 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.363015 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 18:40:22.215901014 +0000 UTC Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.371220 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.390164 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.390208 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.390218 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.390235 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.390246 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:54Z","lastTransitionTime":"2026-02-18T13:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.396727 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.409566 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:54 crc kubenswrapper[4739]: E0218 13:59:54.409671 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.409911 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 13:59:54 crc kubenswrapper[4739]: E0218 13:59:54.409962 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.409986 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 13:59:54 crc kubenswrapper[4739]: E0218 13:59:54.410117 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.418768 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.439381 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.448981 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450256 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ec8fd6de-f77b-48a7-848f-a1b94e866365-cni-binary-copy\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450310 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-system-cni-dir\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450333 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-os-release\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450354 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-hostroot\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450376 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsrwf\" (UniqueName: \"kubernetes.io/projected/ec8fd6de-f77b-48a7-848f-a1b94e866365-kube-api-access-lsrwf\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450400 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-multus-cni-dir\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450421 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-var-lib-kubelet\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450457 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-etc-kubernetes\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450497 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-var-lib-cni-bin\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450527 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-run-multus-certs\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450548 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-cnibin\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450567 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-run-k8s-cni-cncf-io\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450594 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-run-netns\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450621 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-multus-conf-dir\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450643 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-multus-socket-dir-parent\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450664 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-var-lib-cni-multus\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.450688 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ec8fd6de-f77b-48a7-848f-a1b94e866365-multus-daemon-config\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.468964 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.483363 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.492250 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.492276 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.492285 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.492297 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.492307 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:54Z","lastTransitionTime":"2026-02-18T13:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.499991 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.518514 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.547860 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551160 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-multus-socket-dir-parent\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551200 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-var-lib-cni-multus\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551223 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ec8fd6de-f77b-48a7-848f-a1b94e866365-multus-daemon-config\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551256 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ec8fd6de-f77b-48a7-848f-a1b94e866365-cni-binary-copy\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551286 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-system-cni-dir\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551306 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-os-release\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551320 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-var-lib-cni-multus\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551331 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-hostroot\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551382 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-hostroot\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551410 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsrwf\" (UniqueName: \"kubernetes.io/projected/ec8fd6de-f77b-48a7-848f-a1b94e866365-kube-api-access-lsrwf\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551427 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-os-release\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551470 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-system-cni-dir\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551437 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-multus-cni-dir\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551533 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-multus-socket-dir-parent\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551540 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-var-lib-kubelet\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551502 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-multus-cni-dir\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551570 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-var-lib-kubelet\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551583 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-etc-kubernetes\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551638 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-etc-kubernetes\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551648 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-var-lib-cni-bin\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551679 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-run-multus-certs\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551683 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-var-lib-cni-bin\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551703 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-cnibin\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551726 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-run-multus-certs\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551730 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-run-k8s-cni-cncf-io\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551764 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-run-netns\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " 
pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551764 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-run-k8s-cni-cncf-io\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551791 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-host-run-netns\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551816 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-multus-conf-dir\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551790 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-multus-conf-dir\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.551792 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ec8fd6de-f77b-48a7-848f-a1b94e866365-cnibin\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.552028 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ec8fd6de-f77b-48a7-848f-a1b94e866365-cni-binary-copy\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.552293 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ec8fd6de-f77b-48a7-848f-a1b94e866365-multus-daemon-config\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.558762 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mdk59" event={"ID":"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a","Type":"ContainerStarted","Data":"b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e"} Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.558800 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mdk59" event={"ID":"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a","Type":"ContainerStarted","Data":"0a6f4cabed43e26c586da8fdd4c7f4c8e5f03039f28fe82573bf502745b6785a"} Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.569968 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsrwf\" (UniqueName: \"kubernetes.io/projected/ec8fd6de-f77b-48a7-848f-a1b94e866365-kube-api-access-lsrwf\") pod \"multus-h9slg\" (UID: \"ec8fd6de-f77b-48a7-848f-a1b94e866365\") " pod="openshift-multus/multus-h9slg" Feb 18 
13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.576741 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.594028 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.594067 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.594077 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.594096 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.594107 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:54Z","lastTransitionTime":"2026-02-18T13:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.597581 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.617316 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.634150 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.636613 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-h9slg" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.650750 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.665948 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.680954 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.695749 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.695788 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.695798 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.695816 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.695827 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:54Z","lastTransitionTime":"2026-02-18T13:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.701031 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.722169 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.740895 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.754628 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-ltvvj"] Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.755584 4739 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.757174 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-mc7b4"] Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.757593 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x4j94"] Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.757802 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.757813 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.758977 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.760076 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.763884 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.764115 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.764664 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.766298 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.766538 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.766677 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.766972 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.767094 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.767227 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.767669 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.767897 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.767897 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.784850 4739 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.798385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.798424 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.798435 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.798472 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.798483 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:54Z","lastTransitionTime":"2026-02-18T13:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.803276 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.833832 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.846342 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-18 13:54:53 +0000 UTC, rotation deadline is 2026-12-03 00:27:54.208291201 +0000 UTC Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.846396 4739 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6898h27m59.361897159s for next certificate rotation Feb 18 13:59:54 crc 
kubenswrapper[4739]: I0218 13:59:54.851494 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.855838 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-systemd\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.855875 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovn-node-metrics-cert\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.855893 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/617869cd-510c-4491-a8f7-1a7bb2656f26-os-release\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 
13:59:54.855912 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-kubelet\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.855928 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-etc-openvswitch\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.855944 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-875sv\" (UniqueName: \"kubernetes.io/projected/617869cd-510c-4491-a8f7-1a7bb2656f26-kube-api-access-875sv\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856032 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-systemd-units\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856087 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/947a1bc9-4557-4cd9-aa90-9d3893aad914-mcd-auth-proxy-config\") pod \"machine-config-daemon-mc7b4\" (UID: \"947a1bc9-4557-4cd9-aa90-9d3893aad914\") " pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856117 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/617869cd-510c-4491-a8f7-1a7bb2656f26-system-cni-dir\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856146 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/617869cd-510c-4491-a8f7-1a7bb2656f26-cnibin\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856196 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-run-netns\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856222 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/947a1bc9-4557-4cd9-aa90-9d3893aad914-proxy-tls\") 
pod \"machine-config-daemon-mc7b4\" (UID: \"947a1bc9-4557-4cd9-aa90-9d3893aad914\") " pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856244 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/617869cd-510c-4491-a8f7-1a7bb2656f26-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856271 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-slash\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856295 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-node-log\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856343 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-cni-netd\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856368 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-openvswitch\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856389 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-run-ovn-kubernetes\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856423 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856481 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-env-overrides\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856506 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovnkube-script-lib\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856562 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtd5n\" (UniqueName: \"kubernetes.io/projected/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-kube-api-access-dtd5n\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856586 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/947a1bc9-4557-4cd9-aa90-9d3893aad914-rootfs\") pod \"machine-config-daemon-mc7b4\" (UID: \"947a1bc9-4557-4cd9-aa90-9d3893aad914\") " pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856606 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/617869cd-510c-4491-a8f7-1a7bb2656f26-cni-binary-copy\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856629 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn8p7\" (UniqueName: \"kubernetes.io/projected/947a1bc9-4557-4cd9-aa90-9d3893aad914-kube-api-access-hn8p7\") pod \"machine-config-daemon-mc7b4\" (UID: \"947a1bc9-4557-4cd9-aa90-9d3893aad914\") " pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856652 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovnkube-config\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856696 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-cni-bin\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856721 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-var-lib-openvswitch\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856742 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-ovn\") pod \"ovnkube-node-x4j94\" (UID: 
\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856762 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-log-socket\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.856792 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/617869cd-510c-4491-a8f7-1a7bb2656f26-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.862146 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.880748 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: 
I0218 13:59:54.900630 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.900660 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.900670 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.900683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.900692 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:54Z","lastTransitionTime":"2026-02-18T13:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.901681 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.915721 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.946397 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957748 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-openvswitch\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957804 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-run-ovn-kubernetes\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957825 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957848 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-env-overrides\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957864 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovnkube-script-lib\") pod \"ovnkube-node-x4j94\" (UID: 
\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957878 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtd5n\" (UniqueName: \"kubernetes.io/projected/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-kube-api-access-dtd5n\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957891 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/947a1bc9-4557-4cd9-aa90-9d3893aad914-rootfs\") pod \"machine-config-daemon-mc7b4\" (UID: \"947a1bc9-4557-4cd9-aa90-9d3893aad914\") " pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957907 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/617869cd-510c-4491-a8f7-1a7bb2656f26-cni-binary-copy\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957921 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn8p7\" (UniqueName: \"kubernetes.io/projected/947a1bc9-4557-4cd9-aa90-9d3893aad914-kube-api-access-hn8p7\") pod \"machine-config-daemon-mc7b4\" (UID: \"947a1bc9-4557-4cd9-aa90-9d3893aad914\") " pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957947 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovnkube-config\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957963 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-cni-bin\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957979 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-var-lib-openvswitch\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.957992 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-ovn\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958006 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-log-socket\") pod \"ovnkube-node-x4j94\" (UID: 
\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958020 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/617869cd-510c-4491-a8f7-1a7bb2656f26-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958035 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-systemd\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958051 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovn-node-metrics-cert\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958067 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/617869cd-510c-4491-a8f7-1a7bb2656f26-os-release\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958087 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-kubelet\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958101 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-etc-openvswitch\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958115 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-875sv\" (UniqueName: \"kubernetes.io/projected/617869cd-510c-4491-a8f7-1a7bb2656f26-kube-api-access-875sv\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958130 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-systemd-units\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958145 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/947a1bc9-4557-4cd9-aa90-9d3893aad914-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-mc7b4\" (UID: \"947a1bc9-4557-4cd9-aa90-9d3893aad914\") " pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958160 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/617869cd-510c-4491-a8f7-1a7bb2656f26-system-cni-dir\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958177 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/617869cd-510c-4491-a8f7-1a7bb2656f26-cnibin\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958194 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-run-netns\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958210 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/947a1bc9-4557-4cd9-aa90-9d3893aad914-proxy-tls\") pod \"machine-config-daemon-mc7b4\" (UID: \"947a1bc9-4557-4cd9-aa90-9d3893aad914\") " pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958235 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/617869cd-510c-4491-a8f7-1a7bb2656f26-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958252 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-slash\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958267 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-node-log\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958285 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-cni-netd\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958339 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-cni-netd\") pod \"ovnkube-node-x4j94\" 
(UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958374 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-openvswitch\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958395 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-run-ovn-kubernetes\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958416 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.958897 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-env-overrides\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.959373 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovnkube-script-lib\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.959620 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/947a1bc9-4557-4cd9-aa90-9d3893aad914-rootfs\") pod \"machine-config-daemon-mc7b4\" (UID: \"947a1bc9-4557-4cd9-aa90-9d3893aad914\") " pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.960109 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/617869cd-510c-4491-a8f7-1a7bb2656f26-cni-binary-copy\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.960697 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovnkube-config\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.960749 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-cni-bin\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.960783 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-var-lib-openvswitch\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.960806 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-ovn\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.960829 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-log-socket\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.961201 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/617869cd-510c-4491-a8f7-1a7bb2656f26-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.961240 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-systemd\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.961582 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/617869cd-510c-4491-a8f7-1a7bb2656f26-system-cni-dir\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.961637 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/617869cd-510c-4491-a8f7-1a7bb2656f26-os-release\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.961672 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-kubelet\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.961706 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-etc-openvswitch\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 
13:59:54.961695 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-slash\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.961751 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-node-log\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.961747 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-systemd-units\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.961833 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/617869cd-510c-4491-a8f7-1a7bb2656f26-cnibin\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.961850 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/617869cd-510c-4491-a8f7-1a7bb2656f26-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.961873 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-run-netns\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.962245 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/947a1bc9-4557-4cd9-aa90-9d3893aad914-mcd-auth-proxy-config\") pod \"machine-config-daemon-mc7b4\" (UID: \"947a1bc9-4557-4cd9-aa90-9d3893aad914\") " pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.965553 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.966172 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/947a1bc9-4557-4cd9-aa90-9d3893aad914-proxy-tls\") pod \"machine-config-daemon-mc7b4\" (UID: \"947a1bc9-4557-4cd9-aa90-9d3893aad914\") " pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.966509 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovn-node-metrics-cert\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.980115 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtd5n\" (UniqueName: \"kubernetes.io/projected/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-kube-api-access-dtd5n\") pod \"ovnkube-node-x4j94\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.980318 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn8p7\" (UniqueName: \"kubernetes.io/projected/947a1bc9-4557-4cd9-aa90-9d3893aad914-kube-api-access-hn8p7\") pod 
\"machine-config-daemon-mc7b4\" (UID: \"947a1bc9-4557-4cd9-aa90-9d3893aad914\") " pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.981752 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.982099 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-875sv\" (UniqueName: \"kubernetes.io/projected/617869cd-510c-4491-a8f7-1a7bb2656f26-kube-api-access-875sv\") pod \"multus-additional-cni-plugins-ltvvj\" (UID: \"617869cd-510c-4491-a8f7-1a7bb2656f26\") " pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:54 crc kubenswrapper[4739]: I0218 13:59:54.997412 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:54Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.002849 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.002889 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.002900 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.002917 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.002929 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:55Z","lastTransitionTime":"2026-02-18T13:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.014607 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.026356 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.040496 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.057631 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.066870 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.074238 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.079724 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 13:59:55 crc kubenswrapper[4739]: W0218 13:59:55.085135 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod947a1bc9_4557_4cd9_aa90_9d3893aad914.slice/crio-86770f193e77c89b4d1c3736332251a3c332bd2282fffa5e5bc125b5fdcf2747 WatchSource:0}: Error finding container 86770f193e77c89b4d1c3736332251a3c332bd2282fffa5e5bc125b5fdcf2747: Status 404 returned error can't find the container with id 86770f193e77c89b4d1c3736332251a3c332bd2282fffa5e5bc125b5fdcf2747 Feb 18 13:59:55 crc kubenswrapper[4739]: W0218 13:59:55.101348 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf04e1fa3_4bb9_41e9_bf1d_a2862fb63224.slice/crio-994cdd394e91062d3bf50c4eb1ba16a7ab9c2957bfb870b8f9ecfcf4d7fc50a5 WatchSource:0}: Error finding container 994cdd394e91062d3bf50c4eb1ba16a7ab9c2957bfb870b8f9ecfcf4d7fc50a5: Status 404 returned error can't find the container with id 994cdd394e91062d3bf50c4eb1ba16a7ab9c2957bfb870b8f9ecfcf4d7fc50a5 Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.104653 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.104780 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.104867 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.105084 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.105149 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:55Z","lastTransitionTime":"2026-02-18T13:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.208322 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.208362 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.208372 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.208388 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.208399 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:55Z","lastTransitionTime":"2026-02-18T13:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.311511 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.311552 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.311560 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.311574 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.311582 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:55Z","lastTransitionTime":"2026-02-18T13:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.363529 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 20:13:56.267082403 +0000 UTC Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.415915 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.415962 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.415973 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.415990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.416002 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:55Z","lastTransitionTime":"2026-02-18T13:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.518603 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.518632 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.518642 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.518658 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.518668 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:55Z","lastTransitionTime":"2026-02-18T13:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.564161 4739 generic.go:334] "Generic (PLEG): container finished" podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7" exitCode=0 Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.564223 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.564248 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerStarted","Data":"994cdd394e91062d3bf50c4eb1ba16a7ab9c2957bfb870b8f9ecfcf4d7fc50a5"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.566040 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" event={"ID":"617869cd-510c-4491-a8f7-1a7bb2656f26","Type":"ContainerStarted","Data":"6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.566082 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" event={"ID":"617869cd-510c-4491-a8f7-1a7bb2656f26","Type":"ContainerStarted","Data":"d88ffc1d0a6f92570ad7561edcb514a76ecb11d8d9b6417ba255e803be63ca80"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.567923 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9slg" event={"ID":"ec8fd6de-f77b-48a7-848f-a1b94e866365","Type":"ContainerStarted","Data":"f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.567969 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9slg" event={"ID":"ec8fd6de-f77b-48a7-848f-a1b94e866365","Type":"ContainerStarted","Data":"059d35ee1e8ad1f1ba1bb06bc8bad03ac79364e9a893a83f833ab5f10df7108f"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.571327 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.571366 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.571377 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"86770f193e77c89b4d1c3736332251a3c332bd2282fffa5e5bc125b5fdcf2747"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.580746 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.596171 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.610812 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.620832 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.620867 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.620879 4739 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.620893 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.620905 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:55Z","lastTransitionTime":"2026-02-18T13:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.623904 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.643661 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.658701 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.683908 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.705919 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.719916 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.723410 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.723466 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.723476 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.723490 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.723500 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:55Z","lastTransitionTime":"2026-02-18T13:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.733755 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.749543 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.762037 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.774570 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.789394 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.807832 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af
979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.821227 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.825724 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.825895 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.825963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.826033 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.826098 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:55Z","lastTransitionTime":"2026-02-18T13:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.834204 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.845931 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.861282 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.874688 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.885851 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.898722 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.911672 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.925243 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.929049 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 
13:59:55.929100 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.929113 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.929133 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.929145 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:55Z","lastTransitionTime":"2026-02-18T13:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.944714 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z 
is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.958067 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.968536 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.968668 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:55 crc kubenswrapper[4739]: E0218 13:59:55.968695 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:00:03.968667827 +0000 UTC m=+36.464388759 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 13:59:55 crc kubenswrapper[4739]: E0218 13:59:55.968774 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.968788 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:55 crc kubenswrapper[4739]: E0218 13:59:55.968825 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:00:03.96881264 +0000 UTC m=+36.464533562 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 13:59:55 crc kubenswrapper[4739]: E0218 13:59:55.968889 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 13:59:55 crc kubenswrapper[4739]: E0218 13:59:55.968956 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:00:03.968942453 +0000 UTC m=+36.464663385 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.970856 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:55 crc kubenswrapper[4739]: I0218 13:59:55.991706 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:55Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.031782 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.031977 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.032058 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.032118 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.032180 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:56Z","lastTransitionTime":"2026-02-18T13:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.070088 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.070169 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 13:59:56 crc kubenswrapper[4739]: E0218 13:59:56.070323 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 13:59:56 crc kubenswrapper[4739]: E0218 13:59:56.070355 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 13:59:56 crc kubenswrapper[4739]: E0218 13:59:56.070376 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:56 crc kubenswrapper[4739]: E0218 13:59:56.070475 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 14:00:04.07042709 +0000 UTC m=+36.566148042 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:56 crc kubenswrapper[4739]: E0218 13:59:56.070591 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 13:59:56 crc kubenswrapper[4739]: E0218 13:59:56.070658 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 13:59:56 crc kubenswrapper[4739]: E0218 13:59:56.070716 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:56 crc kubenswrapper[4739]: E0218 13:59:56.070814 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 14:00:04.070794539 +0000 UTC m=+36.566515461 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.134713 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.134749 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.134761 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.134776 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.134786 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:56Z","lastTransitionTime":"2026-02-18T13:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.237208 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.237473 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.237565 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.237682 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.237765 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:56Z","lastTransitionTime":"2026-02-18T13:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.340526 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.340574 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.340586 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.340603 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.340615 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:56Z","lastTransitionTime":"2026-02-18T13:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.364512 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 06:55:06.307759645 +0000 UTC Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.409914 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.409942 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.409995 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:56 crc kubenswrapper[4739]: E0218 13:59:56.410044 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 13:59:56 crc kubenswrapper[4739]: E0218 13:59:56.410136 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 13:59:56 crc kubenswrapper[4739]: E0218 13:59:56.410207 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.443542 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.443576 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.443584 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.443598 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.443610 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:56Z","lastTransitionTime":"2026-02-18T13:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.545750 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.545796 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.545805 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.545819 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.545829 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:56Z","lastTransitionTime":"2026-02-18T13:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.577259 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerStarted","Data":"f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.577308 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerStarted","Data":"212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.577322 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerStarted","Data":"15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.577333 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerStarted","Data":"fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.577343 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerStarted","Data":"12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.578663 4739 generic.go:334] "Generic (PLEG): container finished" podID="617869cd-510c-4491-a8f7-1a7bb2656f26" containerID="6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0" exitCode=0 Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.578685 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" event={"ID":"617869cd-510c-4491-a8f7-1a7bb2656f26","Type":"ContainerDied","Data":"6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.592519 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.607328 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.622244 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.632853 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.643720 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.651067 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.651107 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.651118 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.651132 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.651140 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:56Z","lastTransitionTime":"2026-02-18T13:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.665160 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z 
is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.681143 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.701667 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af
979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.715704 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.731760 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.742195 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.753617 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.753649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.753657 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.753671 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.753681 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:56Z","lastTransitionTime":"2026-02-18T13:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.754720 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.766102 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.777409 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.855982 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.856432 4739 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.856499 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.856525 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.856542 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:56Z","lastTransitionTime":"2026-02-18T13:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.958478 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.958526 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.958536 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.958552 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.958562 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:56Z","lastTransitionTime":"2026-02-18T13:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.973690 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-p98v4"] Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.974222 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-p98v4" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.976275 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.976389 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.976813 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.977923 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 13:59:56 crc kubenswrapper[4739]: I0218 13:59:56.990981 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:56Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.003984 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.022826 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.038431 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.051699 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.061384 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.061431 4739 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.061473 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.061492 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.061506 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:57Z","lastTransitionTime":"2026-02-18T13:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.079895 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15ef6462-8149-4976-b2f8-26123d8081ee-host\") pod \"node-ca-p98v4\" (UID: \"15ef6462-8149-4976-b2f8-26123d8081ee\") " pod="openshift-image-registry/node-ca-p98v4" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.079956 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4gwp\" (UniqueName: \"kubernetes.io/projected/15ef6462-8149-4976-b2f8-26123d8081ee-kube-api-access-s4gwp\") pod \"node-ca-p98v4\" (UID: \"15ef6462-8149-4976-b2f8-26123d8081ee\") " pod="openshift-image-registry/node-ca-p98v4" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.080007 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/15ef6462-8149-4976-b2f8-26123d8081ee-serviceca\") pod \"node-ca-p98v4\" (UID: \"15ef6462-8149-4976-b2f8-26123d8081ee\") " pod="openshift-image-registry/node-ca-p98v4" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.080674 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.098545 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc 
kubenswrapper[4739]: I0218 13:59:57.114394 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.130834 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.145252 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02
-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.162882 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z 
is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.164390 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.164435 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.164469 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.164486 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.164499 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:57Z","lastTransitionTime":"2026-02-18T13:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.179680 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.181220 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15ef6462-8149-4976-b2f8-26123d8081ee-host\") pod \"node-ca-p98v4\" (UID: \"15ef6462-8149-4976-b2f8-26123d8081ee\") " pod="openshift-image-registry/node-ca-p98v4" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.181293 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4gwp\" (UniqueName: \"kubernetes.io/projected/15ef6462-8149-4976-b2f8-26123d8081ee-kube-api-access-s4gwp\") pod \"node-ca-p98v4\" (UID: \"15ef6462-8149-4976-b2f8-26123d8081ee\") " pod="openshift-image-registry/node-ca-p98v4" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.181334 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/15ef6462-8149-4976-b2f8-26123d8081ee-serviceca\") pod \"node-ca-p98v4\" (UID: \"15ef6462-8149-4976-b2f8-26123d8081ee\") " pod="openshift-image-registry/node-ca-p98v4" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.181405 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15ef6462-8149-4976-b2f8-26123d8081ee-host\") pod \"node-ca-p98v4\" (UID: \"15ef6462-8149-4976-b2f8-26123d8081ee\") " pod="openshift-image-registry/node-ca-p98v4" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.182659 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/15ef6462-8149-4976-b2f8-26123d8081ee-serviceca\") pod \"node-ca-p98v4\" (UID: \"15ef6462-8149-4976-b2f8-26123d8081ee\") " pod="openshift-image-registry/node-ca-p98v4" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.191935 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.201942 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4gwp\" (UniqueName: \"kubernetes.io/projected/15ef6462-8149-4976-b2f8-26123d8081ee-kube-api-access-s4gwp\") pod \"node-ca-p98v4\" (UID: \"15ef6462-8149-4976-b2f8-26123d8081ee\") " pod="openshift-image-registry/node-ca-p98v4" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.214740 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af
979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.230104 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.267597 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.267650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.267662 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.267680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.267694 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:57Z","lastTransitionTime":"2026-02-18T13:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.286085 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-p98v4" Feb 18 13:59:57 crc kubenswrapper[4739]: W0218 13:59:57.301545 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15ef6462_8149_4976_b2f8_26123d8081ee.slice/crio-36d98cfedcd49dc014867f00845205a6e4227dc4ec28eb4a858bfbb784675758 WatchSource:0}: Error finding container 36d98cfedcd49dc014867f00845205a6e4227dc4ec28eb4a858bfbb784675758: Status 404 returned error can't find the container with id 36d98cfedcd49dc014867f00845205a6e4227dc4ec28eb4a858bfbb784675758 Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.365337 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 20:30:12.230622366 +0000 UTC Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.370999 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.371041 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.371056 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.371076 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.371090 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:57Z","lastTransitionTime":"2026-02-18T13:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.475941 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.475993 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.476005 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.476022 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.476033 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:57Z","lastTransitionTime":"2026-02-18T13:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.578872 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.578909 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.578922 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.578938 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.578951 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:57Z","lastTransitionTime":"2026-02-18T13:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.582938 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-p98v4" event={"ID":"15ef6462-8149-4976-b2f8-26123d8081ee","Type":"ContainerStarted","Data":"36d98cfedcd49dc014867f00845205a6e4227dc4ec28eb4a858bfbb784675758"} Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.584908 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" event={"ID":"617869cd-510c-4491-a8f7-1a7bb2656f26","Type":"ContainerStarted","Data":"6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b"} Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.588728 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerStarted","Data":"d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216"} Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.601476 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.616346 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.632999 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.646360 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 
13:59:57.659485 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.672314 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.680819 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.680853 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.680862 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.680875 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.680885 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:57Z","lastTransitionTime":"2026-02-18T13:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.692510 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z 
is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.712136 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.734850 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.747977 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.764251 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.783263 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.784853 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.784884 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.784896 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.784913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.784925 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:57Z","lastTransitionTime":"2026-02-18T13:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.794793 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.806311 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.824223 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:57Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.888131 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.888349 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.888359 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.888375 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.888385 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:57Z","lastTransitionTime":"2026-02-18T13:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.993393 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.993437 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.993466 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.993482 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:57 crc kubenswrapper[4739]: I0218 13:59:57.993493 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:57Z","lastTransitionTime":"2026-02-18T13:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.056159 4739 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.095753 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.095795 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.095806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.095821 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.095832 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:58Z","lastTransitionTime":"2026-02-18T13:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.198186 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.198225 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.198235 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.198251 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.198263 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:58Z","lastTransitionTime":"2026-02-18T13:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.300920 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.300975 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.300990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.301011 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.301026 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:58Z","lastTransitionTime":"2026-02-18T13:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.366381 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:47:45.99613088 +0000 UTC Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.404185 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.404228 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.404243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.404281 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.404295 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:58Z","lastTransitionTime":"2026-02-18T13:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.409862 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.409951 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 13:59:58 crc kubenswrapper[4739]: E0218 13:59:58.410310 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 13:59:58 crc kubenswrapper[4739]: E0218 13:59:58.410113 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.409946 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 13:59:58 crc kubenswrapper[4739]: E0218 13:59:58.410490 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.427225 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.451334 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z 
is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.469655 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.481211 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.507551 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.507588 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.507598 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.507654 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.507669 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:58Z","lastTransitionTime":"2026-02-18T13:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.508924 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.521871 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.532880 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.546628 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.562113 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.577490 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.592360 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.595715 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-p98v4" event={"ID":"15ef6462-8149-4976-b2f8-26123d8081ee","Type":"ContainerStarted","Data":"d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47"} Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.597535 4739 generic.go:334] "Generic (PLEG): container finished" podID="617869cd-510c-4491-a8f7-1a7bb2656f26" containerID="6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b" exitCode=0 Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.597599 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" event={"ID":"617869cd-510c-4491-a8f7-1a7bb2656f26","Type":"ContainerDied","Data":"6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b"} Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.605377 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.609538 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.609578 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.609588 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.609605 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.609617 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:58Z","lastTransitionTime":"2026-02-18T13:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.625835 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entry
point\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.643102 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.658239 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.675736 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.686017 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.705124 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af
979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.711549 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.711576 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.711584 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.711596 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.711605 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:58Z","lastTransitionTime":"2026-02-18T13:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.718960 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.733292 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.744119 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.756245 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
,{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.769692 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.784423 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.799644 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.814375 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.816032 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.816088 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.816100 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.816116 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.816127 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:58Z","lastTransitionTime":"2026-02-18T13:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.828179 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.839290 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.851984 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.874988 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:58Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.918289 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.918320 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.918328 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.918340 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:58 crc kubenswrapper[4739]: I0218 13:59:58.918349 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:58Z","lastTransitionTime":"2026-02-18T13:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.021059 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.021090 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.021099 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.021112 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.021121 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:59Z","lastTransitionTime":"2026-02-18T13:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.124155 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.124197 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.124214 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.124236 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.124254 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:59Z","lastTransitionTime":"2026-02-18T13:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.226856 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.226890 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.226898 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.226912 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.226922 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:59Z","lastTransitionTime":"2026-02-18T13:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.328888 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.328947 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.328967 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.328992 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.329010 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:59Z","lastTransitionTime":"2026-02-18T13:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.367621 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 21:35:12.955593666 +0000 UTC Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.431621 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.431690 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.431716 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.431748 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.431771 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:59Z","lastTransitionTime":"2026-02-18T13:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.535033 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.535077 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.535094 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.535117 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.535133 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:59Z","lastTransitionTime":"2026-02-18T13:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.608428 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerStarted","Data":"76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34"} Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.611922 4739 generic.go:334] "Generic (PLEG): container finished" podID="617869cd-510c-4491-a8f7-1a7bb2656f26" containerID="2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148" exitCode=0 Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.612000 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" event={"ID":"617869cd-510c-4491-a8f7-1a7bb2656f26","Type":"ContainerDied","Data":"2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148"} Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.627935 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.637311 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.637373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.637391 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.637415 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.637432 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:59Z","lastTransitionTime":"2026-02-18T13:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.643128 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.657195 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.668517 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.683740 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
,{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.702573 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.722279 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.739880 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.739938 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.739951 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.739971 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.739984 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:59Z","lastTransitionTime":"2026-02-18T13:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.741007 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.756182 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.769401 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.784516 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.811080 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.835831 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.842716 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.842963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.842974 4739 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.842989 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.843000 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:59Z","lastTransitionTime":"2026-02-18T13:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.852155 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 
13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.865683 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T13:59:59Z is after 2025-08-24T17:21:41Z" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.946023 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.946062 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.946072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.946087 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 13:59:59 crc kubenswrapper[4739]: I0218 13:59:59.946097 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T13:59:59Z","lastTransitionTime":"2026-02-18T13:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.049711 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.049789 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.049809 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.049835 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.049853 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:00Z","lastTransitionTime":"2026-02-18T14:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.153616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.153700 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.153728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.153760 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.153785 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:00Z","lastTransitionTime":"2026-02-18T14:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.256769 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.256808 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.256819 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.256835 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.256846 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:00Z","lastTransitionTime":"2026-02-18T14:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.361215 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.361270 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.361283 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.361301 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.361313 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:00Z","lastTransitionTime":"2026-02-18T14:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.367891 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 20:48:24.829793243 +0000 UTC Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.410082 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.410227 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.410293 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:00 crc kubenswrapper[4739]: E0218 14:00:00.410247 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:00 crc kubenswrapper[4739]: E0218 14:00:00.410436 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:00 crc kubenswrapper[4739]: E0218 14:00:00.410541 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.464283 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.464338 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.464347 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.464362 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.464374 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:00Z","lastTransitionTime":"2026-02-18T14:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.566659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.566701 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.566712 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.566728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.566739 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:00Z","lastTransitionTime":"2026-02-18T14:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.618441 4739 generic.go:334] "Generic (PLEG): container finished" podID="617869cd-510c-4491-a8f7-1a7bb2656f26" containerID="7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c" exitCode=0 Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.618513 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" event={"ID":"617869cd-510c-4491-a8f7-1a7bb2656f26","Type":"ContainerDied","Data":"7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c"} Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.648914 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af
979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.663003 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.668574 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.668613 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.668625 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.668643 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.668656 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:00Z","lastTransitionTime":"2026-02-18T14:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.674580 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.686644 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.700166 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.711985 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.724808 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.736323 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.749832 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-c
luster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.763533 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.772215 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.772262 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.772275 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.772294 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.772305 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:00Z","lastTransitionTime":"2026-02-18T14:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.780022 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:
59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.791027 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.800696 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.811537 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.828260 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:00Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.875189 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.875228 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.875237 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.875251 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.875263 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:00Z","lastTransitionTime":"2026-02-18T14:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.978179 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.978212 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.978221 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.978234 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:00 crc kubenswrapper[4739]: I0218 14:00:00.978243 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:00Z","lastTransitionTime":"2026-02-18T14:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.081629 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.081704 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.081728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.081769 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.081793 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:01Z","lastTransitionTime":"2026-02-18T14:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.184605 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.184664 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.184683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.184707 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.184728 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:01Z","lastTransitionTime":"2026-02-18T14:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.287747 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.288100 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.288113 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.288134 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.288146 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:01Z","lastTransitionTime":"2026-02-18T14:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.368598 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 08:32:36.426601509 +0000 UTC Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.392728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.392924 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.394380 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.394492 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.394526 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:01Z","lastTransitionTime":"2026-02-18T14:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.498599 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.498637 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.498649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.498676 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.498688 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:01Z","lastTransitionTime":"2026-02-18T14:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.601525 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.601559 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.601570 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.601585 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.601596 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:01Z","lastTransitionTime":"2026-02-18T14:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.630759 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" event={"ID":"617869cd-510c-4491-a8f7-1a7bb2656f26","Type":"ContainerStarted","Data":"2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578"} Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.643549 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerStarted","Data":"e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550"} Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.643885 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.643916 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.656806 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni
-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.675266 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.676582 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.676643 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.691209 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.705078 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.705121 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.705137 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.705161 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.705177 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:01Z","lastTransitionTime":"2026-02-18T14:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.708134 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.738613 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.752682 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.764608 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.790689 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.808062 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.808131 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.808157 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.808205 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.808231 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:01Z","lastTransitionTime":"2026-02-18T14:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.826086 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb017
26c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.844967 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.863488 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 
14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.877252 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.903889 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\
\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f
1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.910066 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.910119 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.910137 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.910161 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.910177 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:01Z","lastTransitionTime":"2026-02-18T14:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.926733 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.937508 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.949694 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.960871 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.972327 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:01 crc kubenswrapper[4739]: I0218 14:00:01.991573 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:01Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.013990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.014062 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.014072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:02 crc 
kubenswrapper[4739]: I0218 14:00:02.014085 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.014095 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:02Z","lastTransitionTime":"2026-02-18T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.025764 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containe
rID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd7
90e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.039912 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.052281 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.065162 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.077097 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.097222 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.114103 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.116969 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.117009 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.117022 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.117039 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.117052 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:02Z","lastTransitionTime":"2026-02-18T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.131410 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.147583 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.163605 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.176547 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:02Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.219064 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.219127 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.219146 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.219170 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.219187 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:02Z","lastTransitionTime":"2026-02-18T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.321491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.321552 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.321575 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.321602 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.321625 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:02Z","lastTransitionTime":"2026-02-18T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.369770 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 18:26:53.140437306 +0000 UTC Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.410760 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.410892 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.410905 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:02 crc kubenswrapper[4739]: E0218 14:00:02.411033 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:02 crc kubenswrapper[4739]: E0218 14:00:02.411209 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:02 crc kubenswrapper[4739]: E0218 14:00:02.411359 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.424017 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.424079 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.424091 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.424106 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.424117 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:02Z","lastTransitionTime":"2026-02-18T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.527380 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.527491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.527509 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.527532 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.527549 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:02Z","lastTransitionTime":"2026-02-18T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.630681 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.630713 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.630721 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.630736 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.630746 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:02Z","lastTransitionTime":"2026-02-18T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.645688 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.732990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.733016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.733024 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.733038 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.733046 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:02Z","lastTransitionTime":"2026-02-18T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.835180 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.835218 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.835230 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.835246 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.835257 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:02Z","lastTransitionTime":"2026-02-18T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.937712 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.937745 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.937754 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.937766 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:02 crc kubenswrapper[4739]: I0218 14:00:02.937774 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:02Z","lastTransitionTime":"2026-02-18T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.042633 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.042718 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.042759 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.042798 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.042821 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:03Z","lastTransitionTime":"2026-02-18T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.146351 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.146479 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.146506 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.146534 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.146561 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:03Z","lastTransitionTime":"2026-02-18T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.249303 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.249351 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.249365 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.249384 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.249396 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:03Z","lastTransitionTime":"2026-02-18T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.353254 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.353333 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.353343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.353360 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.353370 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:03Z","lastTransitionTime":"2026-02-18T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.370862 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 12:00:57.495921341 +0000 UTC Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.455991 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.456034 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.456042 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.456056 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.456066 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:03Z","lastTransitionTime":"2026-02-18T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.551342 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.558969 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.559019 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.559036 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.559056 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.559073 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:03Z","lastTransitionTime":"2026-02-18T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.571060 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.587560 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.604402 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.622571 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.637710 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc 
kubenswrapper[4739]: I0218 14:00:03.653616 4739 generic.go:334] "Generic (PLEG): container finished" podID="617869cd-510c-4491-a8f7-1a7bb2656f26" containerID="2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578" exitCode=0
Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.653751 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.653754 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" event={"ID":"617869cd-510c-4491-a8f7-1a7bb2656f26","Type":"ContainerDied","Data":"2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578"}
Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.654536 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z"
Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.663993 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.664031 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.664041 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.664055 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.664085 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:03Z","lastTransitionTime":"2026-02-18T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.686900 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.711577 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af
979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.729143 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f
3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.742565 4739 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.755861 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.767193 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.767277 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.767295 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.767355 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.767368 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:03Z","lastTransitionTime":"2026-02-18T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.770375 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.786675 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.798706 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.814948 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
,{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.828249 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.840717 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.854557 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.870324 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.870695 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.870731 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.870743 4739 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.870760 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.870771 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:03Z","lastTransitionTime":"2026-02-18T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.883547 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.896132 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.915470 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.927572 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.937303 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.958165 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af
979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.973735 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.973787 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.973797 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.973814 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.973825 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:03Z","lastTransitionTime":"2026-02-18T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.973828 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.988006 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:03 crc kubenswrapper[4739]: I0218 14:00:03.998598 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:03Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.058336 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
,{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.067827 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.067971 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.068040 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:00:20.067993699 +0000 UTC m=+52.563714621 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.068064 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.068089 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.068108 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:00:20.068095012 +0000 UTC m=+52.563815934 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.068192 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.068229 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:00:20.068221165 +0000 UTC m=+52.563942087 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.073258 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.075960 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.076190 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.076275 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.076349 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.076420 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.144967 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.145340 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.145567 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.145774 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.145963 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.160300 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.164383 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.164466 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.164481 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.164497 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.164508 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.169459 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.170357 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.170573 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.170604 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.170621 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.170695 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 14:00:20.170675305 +0000 UTC m=+52.666396237 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.171057 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.171080 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.171091 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.171125 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 14:00:20.171111655 +0000 UTC m=+52.666832597 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.185577 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.191621 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.191711 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.191734 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.191761 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.191783 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.211423 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.216484 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.216530 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.216548 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.216571 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.216593 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.233934 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.237699 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.237756 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.237774 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.237797 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.237820 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.251382 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.251550 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.253769 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.254092 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.254220 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.254342 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.254468 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.357416 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.357499 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.357516 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.357541 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.357558 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.371558 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 02:14:06.041126619 +0000 UTC Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.409988 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.410042 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.410117 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.410162 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.410329 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:04 crc kubenswrapper[4739]: E0218 14:00:04.410433 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.460606 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.460650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.460662 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.460677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.460688 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.563582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.564020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.564135 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.564234 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.564377 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.661121 4739 generic.go:334] "Generic (PLEG): container finished" podID="617869cd-510c-4491-a8f7-1a7bb2656f26" containerID="64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e" exitCode=0 Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.661179 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" event={"ID":"617869cd-510c-4491-a8f7-1a7bb2656f26","Type":"ContainerDied","Data":"64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e"} Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.666865 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.666913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.666925 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.666948 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.666959 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.681624 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30949a783e54c896f531440d4aebffbb04bc63a
b0758bbee0757765f15d1550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.696459 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.711494 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.724535 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.751217 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.765760 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.768942 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.768972 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.768983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.768999 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.769009 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.775782 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.787704 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.799581 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.811766 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.823333 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.835285 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.849486 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.863137 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.870901 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.871145 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.871245 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.871337 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.871433 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.877793 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e1
6cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:04Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.973347 4739 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.973393 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.973404 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.973420 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:04 crc kubenswrapper[4739]: I0218 14:00:04.973433 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:04Z","lastTransitionTime":"2026-02-18T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.076738 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.076789 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.076805 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.076827 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.076844 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:05Z","lastTransitionTime":"2026-02-18T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.180040 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.180086 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.180103 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.180118 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.180128 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:05Z","lastTransitionTime":"2026-02-18T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.282538 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.282635 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.283027 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.283129 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.283149 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:05Z","lastTransitionTime":"2026-02-18T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.372935 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 08:05:37.306191861 +0000 UTC Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.385871 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.385942 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.385968 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.385997 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.386017 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:05Z","lastTransitionTime":"2026-02-18T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.489366 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.489418 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.489434 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.489491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.489528 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:05Z","lastTransitionTime":"2026-02-18T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.592608 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.592670 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.592688 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.592712 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.592729 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:05Z","lastTransitionTime":"2026-02-18T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.668226 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" event={"ID":"617869cd-510c-4491-a8f7-1a7bb2656f26","Type":"ContainerStarted","Data":"6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d"} Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.689787 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.695414 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.695506 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.695529 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.695616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.695639 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:05Z","lastTransitionTime":"2026-02-18T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.706404 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.720930 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.738559 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath
\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.756177 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.766856 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.793380 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af
979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.797577 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.797616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.797627 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.797643 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.797656 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:05Z","lastTransitionTime":"2026-02-18T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.813757 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.831217 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.848962 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.874610 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
,{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.887726 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.898539 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.899908 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.899944 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.899955 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.899971 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.899983 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:05Z","lastTransitionTime":"2026-02-18T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.911246 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:05 crc kubenswrapper[4739]: I0218 14:00:05.924686 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:05Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.002196 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.002236 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:06 crc 
kubenswrapper[4739]: I0218 14:00:06.002247 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.002263 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.002275 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:06Z","lastTransitionTime":"2026-02-18T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.104826 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.104862 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.104873 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.104888 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.104899 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:06Z","lastTransitionTime":"2026-02-18T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.207131 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.207171 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.207185 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.207202 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.207222 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:06Z","lastTransitionTime":"2026-02-18T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.309919 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.309967 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.309978 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.309996 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.310010 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:06Z","lastTransitionTime":"2026-02-18T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.373713 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 17:23:21.598073729 +0000 UTC Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.409403 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.409522 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:06 crc kubenswrapper[4739]: E0218 14:00:06.409617 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.409541 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:06 crc kubenswrapper[4739]: E0218 14:00:06.409717 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:06 crc kubenswrapper[4739]: E0218 14:00:06.409834 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.413085 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.413144 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.413168 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.413200 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.413222 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:06Z","lastTransitionTime":"2026-02-18T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.515094 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.515167 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.515186 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.515212 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.515229 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:06Z","lastTransitionTime":"2026-02-18T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.620645 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.621357 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.621378 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.621402 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.621421 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:06Z","lastTransitionTime":"2026-02-18T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.724556 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.724628 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.724651 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.724680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.724743 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:06Z","lastTransitionTime":"2026-02-18T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.827077 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.827124 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.827141 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.827161 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.827172 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:06Z","lastTransitionTime":"2026-02-18T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.930659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.930745 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.930769 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.930805 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:06 crc kubenswrapper[4739]: I0218 14:00:06.930829 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:06Z","lastTransitionTime":"2026-02-18T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.033561 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.033643 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.033666 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.033691 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.033712 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:07Z","lastTransitionTime":"2026-02-18T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.136751 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.136822 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.136842 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.136870 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.136891 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:07Z","lastTransitionTime":"2026-02-18T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.240112 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.240178 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.240200 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.240226 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.240244 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:07Z","lastTransitionTime":"2026-02-18T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.343495 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.343564 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.343587 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.343626 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.343660 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:07Z","lastTransitionTime":"2026-02-18T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.394156 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 01:16:14.391688026 +0000 UTC Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.445649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.445702 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.445716 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.445732 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.445745 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:07Z","lastTransitionTime":"2026-02-18T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.548655 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.548693 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.548703 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.548717 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.548728 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:07Z","lastTransitionTime":"2026-02-18T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.573375 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr"] Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.574229 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.576393 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.578031 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.604973 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controlle
r-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.622849 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.638112 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.650866 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.650891 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.650900 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.650913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.650921 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:07Z","lastTransitionTime":"2026-02-18T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.651899 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.663392 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.677061 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.678359 4739 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/0.log" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.680687 4739 generic.go:334] "Generic (PLEG): container finished" podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550" exitCode=1 Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.680725 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550"} Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.681357 4739 scope.go:117] "RemoveContainer" containerID="e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.703196 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fdde800e-9fbf-44dc-af43-d9cfc15dfecd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9rjzr\" (UID: \"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.703241 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fdde800e-9fbf-44dc-af43-d9cfc15dfecd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9rjzr\" (UID: \"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.703281 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fdde800e-9fbf-44dc-af43-d9cfc15dfecd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9rjzr\" (UID: \"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.703326 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99ghl\" (UniqueName: \"kubernetes.io/projected/fdde800e-9fbf-44dc-af43-d9cfc15dfecd-kube-api-access-99ghl\") pod \"ovnkube-control-plane-749d76644c-9rjzr\" (UID: \"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.704094 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.717087 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.731877 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.743093 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.753343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.753408 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.753424 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.753461 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.753478 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:07Z","lastTransitionTime":"2026-02-18T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.762694 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.775259 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.787245 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.798046 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.805242 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fdde800e-9fbf-44dc-af43-d9cfc15dfecd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9rjzr\" (UID: \"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.805504 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fdde800e-9fbf-44dc-af43-d9cfc15dfecd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9rjzr\" (UID: \"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.805795 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fdde800e-9fbf-44dc-af43-d9cfc15dfecd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9rjzr\" (UID: \"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.806047 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99ghl\" (UniqueName: \"kubernetes.io/projected/fdde800e-9fbf-44dc-af43-d9cfc15dfecd-kube-api-access-99ghl\") pod \"ovnkube-control-plane-749d76644c-9rjzr\" (UID: \"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.806265 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fdde800e-9fbf-44dc-af43-d9cfc15dfecd-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9rjzr\" (UID: \"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.807008 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fdde800e-9fbf-44dc-af43-d9cfc15dfecd-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9rjzr\" (UID: \"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.814983 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.816425 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fdde800e-9fbf-44dc-af43-d9cfc15dfecd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9rjzr\" (UID: \"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.832284 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99ghl\" (UniqueName: \"kubernetes.io/projected/fdde800e-9fbf-44dc-af43-d9cfc15dfecd-kube-api-access-99ghl\") pod \"ovnkube-control-plane-749d76644c-9rjzr\" (UID: \"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.835308 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.854116 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.855726 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.855757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.855767 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.855782 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.855792 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:07Z","lastTransitionTime":"2026-02-18T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.873881 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.889974 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.896206 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.902895 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: W0218 14:00:07.911401 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdde800e_9fbf_44dc_af43_d9cfc15dfecd.slice/crio-005404f31b97d22e0cb9749d7c7a5c39bbdbd8ae2922dae8226779eb67e69e16 WatchSource:0}: Error finding container 005404f31b97d22e0cb9749d7c7a5c39bbdbd8ae2922dae8226779eb67e69e16: Status 404 returned error can't find the container with id 005404f31b97d22e0cb9749d7c7a5c39bbdbd8ae2922dae8226779eb67e69e16 Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.919456 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.930715 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.944734 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.958493 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.958530 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.958538 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.958551 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.958560 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:07Z","lastTransitionTime":"2026-02-18T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.960427 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.971795 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.984838 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:07 crc kubenswrapper[4739]: I0218 14:00:07.997988 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:07Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.014729 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:06Z\\\",\\\"message\\\":\\\"ift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:00:06.809552 6012 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.809795 6012 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.809895 6012 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:00:06.809939 6012 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.809992 6012 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.810089 6012 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.810355 6012 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.810799 6012 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 14:00:06.811153 6012 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.025432 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.045930 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af
979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.061765 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.061806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.061816 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.061839 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.061849 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:08Z","lastTransitionTime":"2026-02-18T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.061833 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.072214 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.163822 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.163860 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.163868 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.163881 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.163890 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:08Z","lastTransitionTime":"2026-02-18T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.265783 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.265832 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.265844 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.265863 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.265876 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:08Z","lastTransitionTime":"2026-02-18T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.368093 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.368127 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.368136 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.368149 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.368160 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:08Z","lastTransitionTime":"2026-02-18T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.394595 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 18:22:05.387590253 +0000 UTC Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.409948 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.410035 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:08 crc kubenswrapper[4739]: E0218 14:00:08.410064 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.410116 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:08 crc kubenswrapper[4739]: E0218 14:00:08.410182 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:08 crc kubenswrapper[4739]: E0218 14:00:08.410222 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.431730 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\
":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.469070 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.470866 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.470894 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.470904 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.470919 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.470929 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:08Z","lastTransitionTime":"2026-02-18T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.517767 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.537785 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.550166 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.561670 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.571632 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.573001 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.573040 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.573053 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.573070 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.573081 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:08Z","lastTransitionTime":"2026-02-18T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.584651 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.596908 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.611052 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.622704 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.634080 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.650406 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.671643 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b
9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:06Z\\\",\\\"message\\\":\\\"ift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:00:06.809552 6012 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.809795 6012 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.809895 6012 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:00:06.809939 6012 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.809992 6012 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.810089 6012 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.810355 6012 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.810799 6012 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 14:00:06.811153 6012 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.675742 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.675777 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.675788 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.675806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.675818 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:08Z","lastTransitionTime":"2026-02-18T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.676655 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-nhkmm"] Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.677306 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:08 crc kubenswrapper[4739]: E0218 14:00:08.677402 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.686896 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/1.log" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.687430 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/0.log" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.688101 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.691557 4739 generic.go:334] "Generic (PLEG): container finished" podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac" exitCode=1 Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.691615 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac"} Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.691660 4739 scope.go:117] "RemoveContainer" containerID="e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.692294 4739 scope.go:117] "RemoveContainer" containerID="9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac" Feb 18 14:00:08 crc kubenswrapper[4739]: E0218 14:00:08.692427 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.694138 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" event={"ID":"fdde800e-9fbf-44dc-af43-d9cfc15dfecd","Type":"ContainerStarted","Data":"74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f"} Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.694180 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" event={"ID":"fdde800e-9fbf-44dc-af43-d9cfc15dfecd","Type":"ContainerStarted","Data":"e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733"} Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.694193 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" event={"ID":"fdde800e-9fbf-44dc-af43-d9cfc15dfecd","Type":"ContainerStarted","Data":"005404f31b97d22e0cb9749d7c7a5c39bbdbd8ae2922dae8226779eb67e69e16"} Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.703118 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.716121 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.726559 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.748892 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af
979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.763678 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.776097 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.778078 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.778125 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.778135 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.778150 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.778160 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:08Z","lastTransitionTime":"2026-02-18T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.786809 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.798463 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.815476 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.819920 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx99g\" (UniqueName: \"kubernetes.io/projected/151d76ab-14d7-4b0b-a930-785156818a3e-kube-api-access-mx99g\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.820016 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.827861 
4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.838669 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.853759 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.870371 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.879892 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.879942 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.879955 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.879972 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.879985 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:08Z","lastTransitionTime":"2026-02-18T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.882267 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.898783 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.912127 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.921799 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mx99g\" (UniqueName: \"kubernetes.io/projected/151d76ab-14d7-4b0b-a930-785156818a3e-kube-api-access-mx99g\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.921853 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:08 crc kubenswrapper[4739]: E0218 14:00:08.921988 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:08 crc kubenswrapper[4739]: E0218 14:00:08.922033 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs podName:151d76ab-14d7-4b0b-a930-785156818a3e nodeName:}" failed. No retries permitted until 2026-02-18 14:00:09.422018327 +0000 UTC m=+41.917739239 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs") pod "network-metrics-daemon-nhkmm" (UID: "151d76ab-14d7-4b0b-a930-785156818a3e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.942965 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx99g\" (UniqueName: \"kubernetes.io/projected/151d76ab-14d7-4b0b-a930-785156818a3e-kube-api-access-mx99g\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.943271 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9125909e8808e391d55a7f18eae322fa5183a861
bcccc0c8fbc5f1502cf836ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e30949a783e54c896f531440d4aebffbb04bc63ab0758bbee0757765f15d1550\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:06Z\\\",\\\"message\\\":\\\"ift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:00:06.809552 6012 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.809795 6012 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.809895 6012 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 14:00:06.809939 6012 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.809992 6012 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.810089 6012 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.810355 6012 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 14:00:06.810799 6012 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 14:00:06.811153 6012 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"ess-operator/ingress-operator-5b745b69d9-464cg\\\\nI0218 14:00:08.525033 6213 factory.go:1336] Added *v1.Pod event handler 3\\\\nI0218 14:00:08.525072 6213 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI0218 14:00:08.525084 6213 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-ingress-operator/ingress-operator-5b745b69d9-464cg. OVN-Kubernetes controller took 2.0241e-05 seconds. 
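The status patches in these records are doubly escaped: the patch JSON is embedded as a quoted string inside the error message, and klog then quotes the entire err value, which is what turns each quote into the \\\" runs seen above. A sketch that peels both layers with strconv.Unquote and parses the result; the fragment is a shortened, hypothetical stand-in for one of the real patches:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// Shortened stand-in for one escaped patch fragment from the log.
	fragment := `"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"}}"`

	// First pass: undo the klog quoting of the err="..." value.
	once, err := strconv.Unquote(fragment)
	if err != nil {
		panic(err)
	}
	// Second pass: undo the quoting the error message applied to the patch.
	twice, err := strconv.Unquote(`"` + once + `"`)
	if err != nil {
		panic(err)
	}

	var patch map[string]any
	if err := json.Unmarshal([]byte(twice), &patch); err != nil {
		panic(err)
	}
	fmt.Println(patch["metadata"]) // map[uid:ef543e1b-8068-4ea3-b32a-61027b32e95d]
}
```

Decoded, each patch is an ordinary strategic-merge patch: the $setElementOrder/conditions entry pins the ordering of the conditions array that the rest of the patch updates.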
No OVN measurement.\\\\nI0218 14:00:08.525109 6213 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:08.525187 6213 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0218 14:00:08.525196 6213 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:08.525237 6213 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:08.525247 6213 factory.go:656] Stopping watch factory\\\\nI0218 14:00:08.525292 6213 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:08.525261 6213 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:08.525376 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 14:00:08.525476 6213 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.961803 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:08Z is after 2025-08-24T17:21:41Z" Feb 18 
14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.982807 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.982860 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.982871 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.982890 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:08 crc kubenswrapper[4739]: I0218 14:00:08.982904 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:08Z","lastTransitionTime":"2026-02-18T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.085985 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.086062 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.086082 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.086106 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.086124 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:09Z","lastTransitionTime":"2026-02-18T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.189047 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.189109 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.189128 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.189151 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.189170 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:09Z","lastTransitionTime":"2026-02-18T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.292168 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.292212 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.292222 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.292240 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.292252 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:09Z","lastTransitionTime":"2026-02-18T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.394822 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 14:09:56.359530018 +0000 UTC Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.395704 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.395772 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.395786 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.395808 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.395820 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:09Z","lastTransitionTime":"2026-02-18T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.428325 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:09 crc kubenswrapper[4739]: E0218 14:00:09.428546 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:09 crc kubenswrapper[4739]: E0218 14:00:09.428661 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs podName:151d76ab-14d7-4b0b-a930-785156818a3e nodeName:}" failed. 
No retries permitted until 2026-02-18 14:00:10.42863872 +0000 UTC m=+42.924359682 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs") pod "network-metrics-daemon-nhkmm" (UID: "151d76ab-14d7-4b0b-a930-785156818a3e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.498613 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.498691 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.498710 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.498734 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.498753 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:09Z","lastTransitionTime":"2026-02-18T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.601036 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.601105 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.601123 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.601149 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.601168 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:09Z","lastTransitionTime":"2026-02-18T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
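The metrics-certs mount for network-metrics-daemon-nhkmm is retrying on a doubling schedule: durationBeforeRetry was 500ms at 14:00:08, is 1s here, and reaches 2s at 14:00:10 below. A minimal sketch of that back-off; the cap is an assumption for illustration, since the excerpt only shows the first three steps:

```go
package main

import (
	"fmt"
	"time"
)

// Reproduces the doubling retry delays visible in the MountVolume
// failures for metrics-certs: 500ms, then 1s, then 2s, and so on.
func main() {
	delay := 500 * time.Millisecond
	const maxDelay = 2*time.Minute + 2*time.Second // assumed cap, not shown in the log
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: durationBeforeRetry %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

No retry can succeed on its own here: each attempt fails because the object "openshift-multus"/"metrics-daemon-secret" is not registered with the kubelet yet, so the error should clear only once the kubelet learns about that secret, not from a faster retry.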
Has your network provider started?"} Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.698393 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/1.log" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.702797 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.702835 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.702846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.702859 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.702871 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:09Z","lastTransitionTime":"2026-02-18T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.806150 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.806481 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.806576 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.806675 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.806786 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:09Z","lastTransitionTime":"2026-02-18T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.908930 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.908988 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.909006 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.909031 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:09 crc kubenswrapper[4739]: I0218 14:00:09.909048 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:09Z","lastTransitionTime":"2026-02-18T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.011027 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.011063 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.011074 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.011091 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.011104 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:10Z","lastTransitionTime":"2026-02-18T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.113838 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.113917 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.113934 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.113965 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.113988 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:10Z","lastTransitionTime":"2026-02-18T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
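The kubelet re-records the same five node events roughly every 100ms through this stretch, and the Ready=False condition always carries the same message: NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI configuration file yet. A simplified sketch of such a presence check; the accepted extensions are an assumption, and the real runtime also parses and validates the file rather than just checking that one exists:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Reports NetworkReady the way the repeated log message implies:
// false until a CNI configuration file appears in the conf dir.
func main() {
	const confDir = "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("NetworkReady=false: %v\n", err)
		return
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed extension set
			fmt.Printf("NetworkReady=true: found %s\n", e.Name())
			return
		}
	}
	fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
}
```

That the directory stays empty is consistent with the ovnkube-controller crash loop recorded in this section, since the OVN-Kubernetes node components are what would write the configuration.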
Has your network provider started?"} Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.216668 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.216728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.216739 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.216762 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.216774 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:10Z","lastTransitionTime":"2026-02-18T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.319639 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.319693 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.319707 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.319726 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.319738 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:10Z","lastTransitionTime":"2026-02-18T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.395556 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 12:48:23.169473554 +0000 UTC Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.410011 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.410078 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.410144 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.410391 4739 util.go:30] "No sandbox for pod can be found. 
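The two kubelet-serving certificate_manager lines above (14:00:09.394822 and 14:00:10.395556) report the same expiration, 2026-02-24 05:53:03 UTC, but two different rotation deadlines (2025-12-27 and 2025-11-22), both already past. That pattern matches a deadline drawn as a jittered fraction of the certificate's lifetime on each evaluation. A sketch of that computation, where the 70-90% band and the one-year NotBefore are assumptions; the log shows only the expiration and the two sampled deadlines:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point in the assumed 70-90% band of
// the certificate's validity: NotBefore + (0.7 + 0.2*rand) * lifetime.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter := time.Date(2026, time.February, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.AddDate(-1, 0, 0) // hypothetical one-year lifetime
	for i := 0; i < 2; i++ {
		fmt.Println("rotation deadline is", rotationDeadline(notBefore, notAfter))
	}
}
```

With that assumed one-year lifetime the 70-90% band runs from early November 2025 to mid-January 2026, which brackets both logged deadlines; and since both deadlines precede the node's current time, rotation is already due, which would explain the repeated re-evaluations.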
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:10 crc kubenswrapper[4739]: E0218 14:00:10.410398 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:10 crc kubenswrapper[4739]: E0218 14:00:10.410503 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:10 crc kubenswrapper[4739]: E0218 14:00:10.410671 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:10 crc kubenswrapper[4739]: E0218 14:00:10.410766 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.423216 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.423254 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.423262 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.423297 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.423306 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:10Z","lastTransitionTime":"2026-02-18T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.437199 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:10 crc kubenswrapper[4739]: E0218 14:00:10.437345 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:10 crc kubenswrapper[4739]: E0218 14:00:10.437436 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs podName:151d76ab-14d7-4b0b-a930-785156818a3e nodeName:}" failed. No retries permitted until 2026-02-18 14:00:12.437417909 +0000 UTC m=+44.933138831 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs") pod "network-metrics-daemon-nhkmm" (UID: "151d76ab-14d7-4b0b-a930-785156818a3e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.526581 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.526623 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.526635 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.526652 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.526663 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:10Z","lastTransitionTime":"2026-02-18T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.587589 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.588903 4739 scope.go:117] "RemoveContainer" containerID="9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac" Feb 18 14:00:10 crc kubenswrapper[4739]: E0218 14:00:10.589210 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.609724 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\
\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.625145 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.629407 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.629497 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.629514 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.629535 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.629585 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:10Z","lastTransitionTime":"2026-02-18T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.639928 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\
\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.675412 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9125909e8808e391d55a7f18eae322fa5183a861
bcccc0c8fbc5f1502cf836ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"ess-operator/ingress-operator-5b745b69d9-464cg\\\\nI0218 14:00:08.525033 6213 factory.go:1336] Added *v1.Pod event handler 3\\\\nI0218 14:00:08.525072 6213 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI0218 14:00:08.525084 6213 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-ingress-operator/ingress-operator-5b745b69d9-464cg. OVN-Kubernetes controller took 2.0241e-05 seconds. No OVN measurement.\\\\nI0218 14:00:08.525109 6213 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:08.525187 6213 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0218 14:00:08.525196 6213 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:08.525237 6213 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:08.525247 6213 factory.go:656] Stopping watch factory\\\\nI0218 14:00:08.525292 6213 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:08.525261 6213 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:08.525376 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 14:00:08.525476 6213 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.690585 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.717949 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.731464 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.732898 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.732955 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.732972 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.732996 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.733014 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:10Z","lastTransitionTime":"2026-02-18T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.756949 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.776836 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.788858 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.803958 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.819640 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.835146 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.836385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.836432 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.836469 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.836493 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.836509 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:10Z","lastTransitionTime":"2026-02-18T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.853252 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.868183 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.886288 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.900345 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:10Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.938993 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.939253 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.939418 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.939644 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:10 crc kubenswrapper[4739]: I0218 14:00:10.939785 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:10Z","lastTransitionTime":"2026-02-18T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.042632 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.042671 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.042696 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.042711 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.042722 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:11Z","lastTransitionTime":"2026-02-18T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.145057 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.145122 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.145136 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.145153 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.145164 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:11Z","lastTransitionTime":"2026-02-18T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.247884 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.247928 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.247939 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.247955 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.247966 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:11Z","lastTransitionTime":"2026-02-18T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.350219 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.350255 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.350265 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.350281 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.350291 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:11Z","lastTransitionTime":"2026-02-18T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.396247 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 07:53:14.756076496 +0000 UTC Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.452538 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.452574 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.452585 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.452599 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.452608 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:11Z","lastTransitionTime":"2026-02-18T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.555621 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.555659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.555667 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.555680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.555688 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:11Z","lastTransitionTime":"2026-02-18T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.658421 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.658539 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.658563 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.658596 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.658617 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:11Z","lastTransitionTime":"2026-02-18T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.760933 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.761010 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.761029 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.761063 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.761082 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:11Z","lastTransitionTime":"2026-02-18T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.864918 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.864966 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.864980 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.864996 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.865005 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:11Z","lastTransitionTime":"2026-02-18T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.968077 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.968135 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.968154 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.968180 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:11 crc kubenswrapper[4739]: I0218 14:00:11.968198 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:11Z","lastTransitionTime":"2026-02-18T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.071650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.071728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.071749 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.071781 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.071803 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:12Z","lastTransitionTime":"2026-02-18T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.174307 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.174356 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.174371 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.174391 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.174405 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:12Z","lastTransitionTime":"2026-02-18T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.276351 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.276409 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.276421 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.276439 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.276466 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:12Z","lastTransitionTime":"2026-02-18T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.378436 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.378582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.378600 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.378655 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.378670 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:12Z","lastTransitionTime":"2026-02-18T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.397181 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 15:15:12.754851634 +0000 UTC Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.409572 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.409588 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.409588 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.409646 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:12 crc kubenswrapper[4739]: E0218 14:00:12.409769 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:12 crc kubenswrapper[4739]: E0218 14:00:12.409879 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:12 crc kubenswrapper[4739]: E0218 14:00:12.409975 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:12 crc kubenswrapper[4739]: E0218 14:00:12.410070 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.471329 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:12 crc kubenswrapper[4739]: E0218 14:00:12.471518 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:12 crc kubenswrapper[4739]: E0218 14:00:12.471886 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs podName:151d76ab-14d7-4b0b-a930-785156818a3e nodeName:}" failed. No retries permitted until 2026-02-18 14:00:16.471867664 +0000 UTC m=+48.967588586 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs") pod "network-metrics-daemon-nhkmm" (UID: "151d76ab-14d7-4b0b-a930-785156818a3e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.481877 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.481938 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.481955 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.481980 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.481998 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:12Z","lastTransitionTime":"2026-02-18T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.585278 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.585317 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.585329 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.585346 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.585359 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:12Z","lastTransitionTime":"2026-02-18T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.688627 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.688685 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.688704 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.688732 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.688756 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:12Z","lastTransitionTime":"2026-02-18T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.791589 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.791631 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.791640 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.791656 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.791666 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:12Z","lastTransitionTime":"2026-02-18T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.893878 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.893938 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.893962 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.893991 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.894013 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:12Z","lastTransitionTime":"2026-02-18T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.998721 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.998779 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.998798 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.998828 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:12 crc kubenswrapper[4739]: I0218 14:00:12.998850 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:12Z","lastTransitionTime":"2026-02-18T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.101013 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.101052 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.101061 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.101074 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.101082 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:13Z","lastTransitionTime":"2026-02-18T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.204082 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.204122 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.204130 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.204143 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.204154 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:13Z","lastTransitionTime":"2026-02-18T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.307061 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.307132 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.307156 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.307188 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.307210 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:13Z","lastTransitionTime":"2026-02-18T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.398231 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 10:54:25.081633871 +0000 UTC Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.410071 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.410107 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.410117 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.410130 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.410142 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:13Z","lastTransitionTime":"2026-02-18T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.513487 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.513545 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.513564 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.513588 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.513777 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:13Z","lastTransitionTime":"2026-02-18T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.616193 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.616245 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.616258 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.616275 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.616287 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:13Z","lastTransitionTime":"2026-02-18T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.719033 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.719097 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.719116 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.719142 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.719160 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:13Z","lastTransitionTime":"2026-02-18T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.821741 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.821786 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.821798 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.821816 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.821828 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:13Z","lastTransitionTime":"2026-02-18T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.924945 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.925019 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.925037 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.925064 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:13 crc kubenswrapper[4739]: I0218 14:00:13.925081 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:13Z","lastTransitionTime":"2026-02-18T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.028117 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.028218 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.028278 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.028306 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.028324 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.130951 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.130996 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.131004 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.131019 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.131028 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.234008 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.234081 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.234105 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.234134 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.234160 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.337624 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.337688 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.337706 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.337734 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.337751 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.355898 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.355963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.355986 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.356016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.356038 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: E0218 14:00:14.377533 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.382678 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.382723 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.382732 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.382747 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.382756 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.398963 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 21:02:36.669848772 +0000 UTC Feb 18 14:00:14 crc kubenswrapper[4739]: E0218 14:00:14.403995 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.408978 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.409036 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.409052 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.409072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.409085 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.409399 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.409520 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:14 crc kubenswrapper[4739]: E0218 14:00:14.409582 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.409620 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.409654 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:14 crc kubenswrapper[4739]: E0218 14:00:14.409756 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:14 crc kubenswrapper[4739]: E0218 14:00:14.409845 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:14 crc kubenswrapper[4739]: E0218 14:00:14.410071 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:14 crc kubenswrapper[4739]: E0218 14:00:14.427154 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.431414 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.431521 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.431539 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.431566 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.431622 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: E0218 14:00:14.450566 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.455531 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.455566 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.455578 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.455596 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.455607 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: E0218 14:00:14.469350 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:14Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:14 crc kubenswrapper[4739]: E0218 14:00:14.469545 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.471175 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.471229 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.471245 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.471267 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.471285 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.574539 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.574627 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.574648 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.574674 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.574692 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.677874 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.677935 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.677952 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.677976 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.677993 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.781075 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.781156 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.781224 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.781243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.781254 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.884039 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.884085 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.884097 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.884116 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.884128 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.987852 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.987930 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.987950 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.987974 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:14 crc kubenswrapper[4739]: I0218 14:00:14.987995 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:14Z","lastTransitionTime":"2026-02-18T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.090385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.090658 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.090748 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.090850 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.090930 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:15Z","lastTransitionTime":"2026-02-18T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.193384 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.193484 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.193503 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.193527 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.193544 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:15Z","lastTransitionTime":"2026-02-18T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.295631 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.295685 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.295702 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.295721 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.295733 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:15Z","lastTransitionTime":"2026-02-18T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.398979 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.399026 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.399036 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.399055 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.399067 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:15Z","lastTransitionTime":"2026-02-18T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.399109 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 08:44:10.342885834 +0000 UTC Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.501310 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.501402 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.501415 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.501433 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.501471 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:15Z","lastTransitionTime":"2026-02-18T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.604543 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.604931 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.605100 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.605135 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.605153 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:15Z","lastTransitionTime":"2026-02-18T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.708772 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.709129 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.709369 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.709599 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.709742 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:15Z","lastTransitionTime":"2026-02-18T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.812916 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.812982 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.813003 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.813033 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.813054 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:15Z","lastTransitionTime":"2026-02-18T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.916514 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.916605 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.916623 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.916645 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:15 crc kubenswrapper[4739]: I0218 14:00:15.916662 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:15Z","lastTransitionTime":"2026-02-18T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.019651 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.019720 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.019748 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.019777 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.019800 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:16Z","lastTransitionTime":"2026-02-18T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.122679 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.122741 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.122763 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.122793 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.122816 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:16Z","lastTransitionTime":"2026-02-18T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.225213 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.225263 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.225279 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.225303 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.225320 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:16Z","lastTransitionTime":"2026-02-18T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.328541 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.328590 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.328606 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.328629 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.328645 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:16Z","lastTransitionTime":"2026-02-18T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.399649 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 07:32:01.804771739 +0000 UTC Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.410021 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.410160 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:16 crc kubenswrapper[4739]: E0218 14:00:16.410395 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.410803 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.410947 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:16 crc kubenswrapper[4739]: E0218 14:00:16.411038 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:16 crc kubenswrapper[4739]: E0218 14:00:16.411165 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:16 crc kubenswrapper[4739]: E0218 14:00:16.411293 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.431257 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.431316 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.431335 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.431360 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.431378 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:16Z","lastTransitionTime":"2026-02-18T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.515192 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:16 crc kubenswrapper[4739]: E0218 14:00:16.515505 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:16 crc kubenswrapper[4739]: E0218 14:00:16.515614 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs podName:151d76ab-14d7-4b0b-a930-785156818a3e nodeName:}" failed. No retries permitted until 2026-02-18 14:00:24.515586527 +0000 UTC m=+57.011307489 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs") pod "network-metrics-daemon-nhkmm" (UID: "151d76ab-14d7-4b0b-a930-785156818a3e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.534527 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.534608 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.534629 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.534650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.534668 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:16Z","lastTransitionTime":"2026-02-18T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.638292 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.638644 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.638817 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.638951 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.639112 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:16Z","lastTransitionTime":"2026-02-18T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.742068 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.742128 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.742146 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.742172 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.742189 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:16Z","lastTransitionTime":"2026-02-18T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.845120 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.845174 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.845191 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.845214 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.845232 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:16Z","lastTransitionTime":"2026-02-18T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.948861 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.948935 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.948958 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.948987 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:16 crc kubenswrapper[4739]: I0218 14:00:16.949010 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:16Z","lastTransitionTime":"2026-02-18T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.052252 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.052318 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.052341 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.052373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.052397 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:17Z","lastTransitionTime":"2026-02-18T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.155688 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.155733 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.155743 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.155760 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.155772 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:17Z","lastTransitionTime":"2026-02-18T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.259250 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.259782 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.259815 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.259848 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.259872 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:17Z","lastTransitionTime":"2026-02-18T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.364616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.364666 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.364680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.364698 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.364712 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:17Z","lastTransitionTime":"2026-02-18T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.400845 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 05:17:26.935431723 +0000 UTC Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.467519 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.467555 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.467565 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.467579 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.467588 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:17Z","lastTransitionTime":"2026-02-18T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.570616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.570666 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.570678 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.570697 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.570710 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:17Z","lastTransitionTime":"2026-02-18T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.673243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.673286 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.673296 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.673312 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.673323 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:17Z","lastTransitionTime":"2026-02-18T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.776510 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.776587 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.776599 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.776615 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.776626 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:17Z","lastTransitionTime":"2026-02-18T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.879744 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.879801 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.879842 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.879873 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.879894 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:17Z","lastTransitionTime":"2026-02-18T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.983350 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.983492 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.983512 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.983539 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:17 crc kubenswrapper[4739]: I0218 14:00:17.983555 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:17Z","lastTransitionTime":"2026-02-18T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.086426 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.086539 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.086556 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.086581 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.086604 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:18Z","lastTransitionTime":"2026-02-18T14:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.189565 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.189613 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.189623 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.189639 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.189651 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:18Z","lastTransitionTime":"2026-02-18T14:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.292325 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.292362 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.292373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.292388 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.292398 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:18Z","lastTransitionTime":"2026-02-18T14:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.395535 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.395637 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.395650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.395668 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.395679 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:18Z","lastTransitionTime":"2026-02-18T14:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.402042 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 10:05:50.053467024 +0000 UTC Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.409592 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.409630 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:18 crc kubenswrapper[4739]: E0218 14:00:18.409704 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.409798 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.409806 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:18 crc kubenswrapper[4739]: E0218 14:00:18.409983 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:18 crc kubenswrapper[4739]: E0218 14:00:18.410162 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:18 crc kubenswrapper[4739]: E0218 14:00:18.410250 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.430472 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.444699 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.457819 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.483393 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"ess-operator/ingress-operator-5b745b69d9-464cg\\\\nI0218 14:00:08.525033 6213 factory.go:1336] Added *v1.Pod event handler 3\\\\nI0218 14:00:08.525072 6213 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI0218 14:00:08.525084 6213 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-ingress-operator/ingress-operator-5b745b69d9-464cg. OVN-Kubernetes controller took 2.0241e-05 seconds. 
No OVN measurement.\\\\nI0218 14:00:08.525109 6213 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:08.525187 6213 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0218 14:00:08.525196 6213 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:08.525237 6213 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:08.525247 6213 factory.go:656] Stopping watch factory\\\\nI0218 14:00:08.525292 6213 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:08.525261 6213 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:08.525376 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 14:00:08.525476 6213 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.498265 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.498314 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.498326 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.498373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.498388 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:18Z","lastTransitionTime":"2026-02-18T14:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.502521 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.526731 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2d
ab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.540614 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.552934 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.566400 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.580358 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.592242 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.600370 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.600396 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.600407 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.600425 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.600461 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:18Z","lastTransitionTime":"2026-02-18T14:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.602937 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.616662 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.629349 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.641334 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.660778 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.674212 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:18Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.702869 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.702904 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.702916 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.702935 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.702951 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:18Z","lastTransitionTime":"2026-02-18T14:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.806091 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.806150 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.806166 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.806189 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.806206 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:18Z","lastTransitionTime":"2026-02-18T14:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.913294 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.913367 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.913385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.913409 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:18 crc kubenswrapper[4739]: I0218 14:00:18.913429 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:18Z","lastTransitionTime":"2026-02-18T14:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.016394 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.016489 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.016509 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.016541 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.016562 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:19Z","lastTransitionTime":"2026-02-18T14:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.119656 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.119763 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.119784 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.119845 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.119864 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:19Z","lastTransitionTime":"2026-02-18T14:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.222616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.222720 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.222734 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.222757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.222776 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:19Z","lastTransitionTime":"2026-02-18T14:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.325312 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.325363 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.325378 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.325400 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.325416 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:19Z","lastTransitionTime":"2026-02-18T14:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.402557 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 16:13:04.532160355 +0000 UTC Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.428101 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.428151 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.428166 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.428182 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.428194 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:19Z","lastTransitionTime":"2026-02-18T14:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.530972 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.531047 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.531069 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.531097 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.531122 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:19Z","lastTransitionTime":"2026-02-18T14:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.634218 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.634272 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.634317 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.634336 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.634350 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:19Z","lastTransitionTime":"2026-02-18T14:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.738088 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.738162 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.738184 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.738212 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.738237 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:19Z","lastTransitionTime":"2026-02-18T14:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.841090 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.841148 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.841218 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.841243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.841312 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:19Z","lastTransitionTime":"2026-02-18T14:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.845020 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.860987 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.881590 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:19Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.904649 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:19Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.921357 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:19Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.939812 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:19Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.944685 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.944774 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.944797 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.944827 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.944851 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:19Z","lastTransitionTime":"2026-02-18T14:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.962289 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:19Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.977699 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:19Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:19 crc kubenswrapper[4739]: I0218 14:00:19.992226 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:19Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.008877 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
,{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.023700 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.039916 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.047725 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.047768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.047787 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.047808 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.047821 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:20Z","lastTransitionTime":"2026-02-18T14:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.064990 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.079514 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.097762 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.112389 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.126652 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.148145 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"ess-operator/ingress-operator-5b745b69d9-464cg\\\\nI0218 14:00:08.525033 6213 factory.go:1336] Added *v1.Pod event handler 3\\\\nI0218 14:00:08.525072 6213 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI0218 14:00:08.525084 6213 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-ingress-operator/ingress-operator-5b745b69d9-464cg. OVN-Kubernetes controller took 2.0241e-05 seconds. 
No OVN measurement.\\\\nI0218 14:00:08.525109 6213 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:08.525187 6213 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0218 14:00:08.525196 6213 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:08.525237 6213 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:08.525247 6213 factory.go:656] Stopping watch factory\\\\nI0218 14:00:08.525292 6213 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:08.525261 6213 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:08.525376 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 14:00:08.525476 6213 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.151201 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.151289 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.151306 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.151333 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.151350 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:20Z","lastTransitionTime":"2026-02-18T14:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.153573 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.153685 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.153716 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.153792 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.153831 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:00:52.153818836 +0000 UTC m=+84.649539758 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.153950 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:00:52.153943679 +0000 UTC m=+84.649664591 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.154002 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.154021 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:00:52.15401578 +0000 UTC m=+84.649736702 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.166618 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:20Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.254150 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.254196 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.254321 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.254337 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.254334 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.254376 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.254392 4739 
projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.254350 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.254473 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 14:00:52.254430751 +0000 UTC m=+84.750151763 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.254482 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.254502 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.254512 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.254525 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 14:00:52.254504633 +0000 UTC m=+84.750225665 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.254528 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.254548 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:20Z","lastTransitionTime":"2026-02-18T14:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.357097 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.357167 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.357192 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.357220 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.357244 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:20Z","lastTransitionTime":"2026-02-18T14:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.402919 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 12:07:43.778554313 +0000 UTC Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.410328 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.410480 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.410345 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.410590 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.410648 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.410782 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.410882 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:20 crc kubenswrapper[4739]: E0218 14:00:20.410973 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.460733 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.460812 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.460848 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.460877 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.460900 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:20Z","lastTransitionTime":"2026-02-18T14:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.563967 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.564033 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.564051 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.564076 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.564093 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:20Z","lastTransitionTime":"2026-02-18T14:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.667583 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.667640 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.667657 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.667681 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.667699 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:20Z","lastTransitionTime":"2026-02-18T14:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.770259 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.770297 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.770308 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.770323 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.770336 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:20Z","lastTransitionTime":"2026-02-18T14:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.873256 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.873292 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.873304 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.873323 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.873338 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:20Z","lastTransitionTime":"2026-02-18T14:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.976166 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.976217 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.976228 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.976242 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:20 crc kubenswrapper[4739]: I0218 14:00:20.976252 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:20Z","lastTransitionTime":"2026-02-18T14:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.078792 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.078843 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.078860 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.078882 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.078904 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:21Z","lastTransitionTime":"2026-02-18T14:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.181462 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.181509 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.181527 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.181545 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.181556 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:21Z","lastTransitionTime":"2026-02-18T14:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.283792 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.283860 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.283871 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.283910 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.283922 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:21Z","lastTransitionTime":"2026-02-18T14:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.387850 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.387913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.387930 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.387954 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.387969 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:21Z","lastTransitionTime":"2026-02-18T14:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.404079 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 14:23:08.351498249 +0000 UTC Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.489975 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.490026 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.490035 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.490047 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.490057 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:21Z","lastTransitionTime":"2026-02-18T14:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.592619 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.592666 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.592677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.592692 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.592703 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:21Z","lastTransitionTime":"2026-02-18T14:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.695177 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.695224 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.695235 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.695250 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.695263 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:21Z","lastTransitionTime":"2026-02-18T14:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.798796 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.798862 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.798879 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.798903 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.798919 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:21Z","lastTransitionTime":"2026-02-18T14:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.901664 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.901717 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.901735 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.901760 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:21 crc kubenswrapper[4739]: I0218 14:00:21.901779 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:21Z","lastTransitionTime":"2026-02-18T14:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.004744 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.004801 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.004820 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.004846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.004864 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:22Z","lastTransitionTime":"2026-02-18T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.108382 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.108481 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.108499 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.108560 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.108581 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:22Z","lastTransitionTime":"2026-02-18T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.212648 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.212760 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.212774 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.212796 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.212811 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:22Z","lastTransitionTime":"2026-02-18T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.315179 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.315213 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.315222 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.315237 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.315246 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:22Z","lastTransitionTime":"2026-02-18T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.404655 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 23:54:45.23368143 +0000 UTC Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.410182 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.410233 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:22 crc kubenswrapper[4739]: E0218 14:00:22.410358 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.410419 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:22 crc kubenswrapper[4739]: E0218 14:00:22.410644 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.410740 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
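The loop above is one condition reported over and over: the runtime stays NetworkReady=false until a CNI configuration file shows up in /etc/kubernetes/cni/net.d/, which on this cluster happens only once the OVN-Kubernetes node pod is up. A minimal sketch of that readiness check (not the actual kubelet/CRI-O code; the extension list follows libcni's usual .conf/.conflist/.json):

#!/usr/bin/env python3
"""Approximate the CNI readiness probe behind NetworkReady=false."""
import pathlib

CNI_CONF_DIR = pathlib.Path("/etc/kubernetes/cni/net.d")  # directory named in the log
CNI_EXTENSIONS = {".conf", ".conflist", ".json"}          # assumed libcni extensions

def network_ready() -> bool:
    # "no CNI configuration file in /etc/kubernetes/cni/net.d/" means this
    # scan found nothing; one valid config file flips the status to ready.
    if not CNI_CONF_DIR.is_dir():
        return False
    return any(p.suffix in CNI_EXTENSIONS
               for p in CNI_CONF_DIR.iterdir() if p.is_file())

if __name__ == "__main__":
    print("NetworkReady=true" if network_ready()
          else "NetworkReady=false reason:NetworkPluginNotReady")

Once ovnkube-node writes its config file, the same scan succeeds and the Ready condition clears without a kubelet restart.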
Feb 18 14:00:22 crc kubenswrapper[4739]: E0218 14:00:22.411089 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:22 crc kubenswrapper[4739]: E0218 14:00:22.411195 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.417565 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.417608 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.417626 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.417653 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.417669 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:22Z","lastTransitionTime":"2026-02-18T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.520379 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.520526 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.520554 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.520577 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.520594 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:22Z","lastTransitionTime":"2026-02-18T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.623409 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.623495 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.623518 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.623549 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.623570 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:22Z","lastTransitionTime":"2026-02-18T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.725857 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.725916 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.725937 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.725964 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.725984 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:22Z","lastTransitionTime":"2026-02-18T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.829235 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.829343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.829354 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.829367 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.829376 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:22Z","lastTransitionTime":"2026-02-18T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.932667 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.932751 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.932789 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.932819 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:22 crc kubenswrapper[4739]: I0218 14:00:22.932839 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:22Z","lastTransitionTime":"2026-02-18T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.035539 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.035589 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.035602 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.035620 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.035633 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:23Z","lastTransitionTime":"2026-02-18T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.138709 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.138756 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.138772 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.138792 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.138806 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:23Z","lastTransitionTime":"2026-02-18T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.241187 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.241230 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.241242 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.241257 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.241268 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:23Z","lastTransitionTime":"2026-02-18T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.343561 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.343591 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.343600 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.343631 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.343641 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:23Z","lastTransitionTime":"2026-02-18T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.404972 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 16:12:22.618446957 +0000 UTC
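Each certificate_manager line prints a freshly computed rotation deadline (2025-12-13, 2025-12-25, now 2025-11-22), and every one of them is already in the past relative to the node clock of 2026-02-18, so rotation is due immediately and the deadline is re-jittered on each pass. A sketch of the computation, modeled on client-go's certificate manager, which picks a random point in the 70-90% span of the certificate's lifetime (the one-year lifetime below is an assumption that happens to fit the logged values):

import random
from datetime import datetime, timedelta

def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
    # jitter the deadline into the 70-90% region of the cert lifetime
    total = (not_after - not_before).total_seconds()
    return not_before + timedelta(seconds=total * (0.7 + 0.3 * random.random()))

expiry = datetime(2026, 2, 24, 5, 53, 3)   # "Certificate expiration is ..." above
issued = expiry - timedelta(days=365)      # assumed issuance date (not in the log)
print(rotation_deadline(issued, expiry))   # falls in Nov 2025 - Jan 2026, as logged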
Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.446040 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.446096 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.446114 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.446139 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.446156 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:23Z","lastTransitionTime":"2026-02-18T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.549357 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.549432 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.549483 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.549518 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.549544 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:23Z","lastTransitionTime":"2026-02-18T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.653175 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.653212 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.653221 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.653238 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.653248 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:23Z","lastTransitionTime":"2026-02-18T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.756008 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.756065 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.756078 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.756102 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.756116 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:23Z","lastTransitionTime":"2026-02-18T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.859683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.859761 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.859774 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.859823 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.859837 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:23Z","lastTransitionTime":"2026-02-18T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.962771 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.962841 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.962853 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.962897 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:23 crc kubenswrapper[4739]: I0218 14:00:23.962911 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:23Z","lastTransitionTime":"2026-02-18T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.064952 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.064997 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.065008 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.065025 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.065039 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.168260 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.168302 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.168314 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.168330 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.168340 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.270348 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.270418 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.270430 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.270471 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.270483 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.373359 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.373433 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.373488 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.373521 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.373546 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.405364 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 22:11:51.842736135 +0000 UTC Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.409612 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.409643 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.409674 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:24 crc kubenswrapper[4739]: E0218 14:00:24.409847 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.409943 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:24 crc kubenswrapper[4739]: E0218 14:00:24.410038 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:24 crc kubenswrapper[4739]: E0218 14:00:24.410158 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:24 crc kubenswrapper[4739]: E0218 14:00:24.410619 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.410920 4739 scope.go:117] "RemoveContainer" containerID="9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.477580 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.477658 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.477676 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.477727 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.477746 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.582320 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.582410 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.582666 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.582719 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.582739 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.604046 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:24 crc kubenswrapper[4739]: E0218 14:00:24.604218 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
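The metrics-certs mount keeps failing because the secret is not yet "registered" (the kubelet's informer cache has not seen it), and each consecutive failure widens a per-volume exponential backoff; that is where the "durationBeforeRetry 16s" in the next entry comes from. A sketch of the schedule, assuming the kubelet's usual 500 ms initial delay, doubling per failure, capped near 2m2s (constants are assumptions, not read from this log):

from datetime import timedelta

INITIAL = timedelta(milliseconds=500)   # assumed initial backoff
CAP = timedelta(minutes=2, seconds=2)   # assumed maximum backoff

def duration_before_retry(failures: int) -> timedelta:
    # double after every consecutive failure, then saturate at CAP
    return min(INITIAL * (2 ** max(failures - 1, 0)), CAP)

# A sixth consecutive failure yields 500 ms * 2**5 = 16 s, matching the
# window logged here: failed at 14:00:24.604, retry allowed at 14:00:40.604.
print(duration_before_retry(6))  # 0:00:16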
Feb 18 14:00:24 crc kubenswrapper[4739]: E0218 14:00:24.604294 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs podName:151d76ab-14d7-4b0b-a930-785156818a3e nodeName:}" failed. No retries permitted until 2026-02-18 14:00:40.604271334 +0000 UTC m=+73.099992286 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs") pod "network-metrics-daemon-nhkmm" (UID: "151d76ab-14d7-4b0b-a930-785156818a3e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.692767 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.692827 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.692846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.692870 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.692888 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.756855 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/1.log" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.761761 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerStarted","Data":"b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.763116 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.783100 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b63adf-c60d-4c1e-88dd-3316c9c01ea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24204b574214fd132c4600c72d6efea99d8781e63feeb0ab418a3248413909f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.796286 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.796570 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.796594 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.796623 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.796648 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.807858 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.808650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 
14:00:24.808702 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.808718 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.808740 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.808755 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: E0218 14:00:24.829052 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.829642 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.850023 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.850081 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.850095 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.850119 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.850137 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.885225 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:24 crc kubenswrapper[4739]: E0218 14:00:24.885330 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.890744 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.890786 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.890799 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.890820 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.890835 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: E0218 14:00:24.915832 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.920581 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.920638 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.920652 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.920673 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.920690 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.930423 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df78
0aef7ed78ce963f6dca3440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"ess-operator/ingress-operator-5b745b69d9-464cg\\\\nI0218 14:00:08.525033 6213 factory.go:1336] Added *v1.Pod event handler 3\\\\nI0218 14:00:08.525072 6213 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI0218 14:00:08.525084 6213 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-ingress-operator/ingress-operator-5b745b69d9-464cg. OVN-Kubernetes controller took 2.0241e-05 seconds. No OVN measurement.\\\\nI0218 14:00:08.525109 6213 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:08.525187 6213 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0218 14:00:08.525196 6213 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:08.525237 6213 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:08.525247 6213 factory.go:656] Stopping watch factory\\\\nI0218 14:00:08.525292 6213 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:08.525261 6213 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:08.525376 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 14:00:08.525476 6213 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:24 crc kubenswrapper[4739]: E0218 14:00:24.945779 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.946373 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 
14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.950866 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.950911 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.950923 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.950943 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.950957 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: E0218 14:00:24.967547 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:24 crc kubenswrapper[4739]: E0218 14:00:24.967656 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.969123 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.969149 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.969159 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.969202 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.969212 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:24Z","lastTransitionTime":"2026-02-18T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.973146 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.985951 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:24 crc kubenswrapper[4739]: I0218 14:00:24.998272 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:24Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.009045 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.023406 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.034721 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.045855 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.056185 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.069103 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.071568 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.071595 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.071606 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.071621 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.071634 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:25Z","lastTransitionTime":"2026-02-18T14:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.086173 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.102559 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.113291 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.173916 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.173951 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.173959 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.173973 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.173982 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:25Z","lastTransitionTime":"2026-02-18T14:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.277338 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.277400 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.277418 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.277469 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.277489 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:25Z","lastTransitionTime":"2026-02-18T14:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.380775 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.380827 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.380843 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.380866 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.380883 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:25Z","lastTransitionTime":"2026-02-18T14:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.406401 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 18:40:05.566871338 +0000 UTC Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.483360 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.483410 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.483422 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.483458 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.483471 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:25Z","lastTransitionTime":"2026-02-18T14:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.585983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.586039 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.586052 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.586070 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.586084 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:25Z","lastTransitionTime":"2026-02-18T14:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.689398 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.689491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.689515 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.689543 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.689564 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:25Z","lastTransitionTime":"2026-02-18T14:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.768614 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/2.log" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.769360 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/1.log" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.773013 4739 generic.go:334] "Generic (PLEG): container finished" podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d" exitCode=1 Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.773049 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d"} Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.773115 4739 scope.go:117] "RemoveContainer" containerID="9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.774258 4739 scope.go:117] "RemoveContainer" containerID="b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d" Feb 18 14:00:25 crc kubenswrapper[4739]: E0218 14:00:25.776584 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.790864 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b63adf-c60d-4c1e-88dd-3316c9c01ea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24204b574214fd132c4600c72d6efea99d8781e63feeb0ab418a3248413909f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.792227 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.792250 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.792258 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.792272 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.792282 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:25Z","lastTransitionTime":"2026-02-18T14:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.805283 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.818438 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.834953 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.866877 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9125909e8808e391d55a7f18eae322fa5183a861bcccc0c8fbc5f1502cf836ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"ess-operator/ingress-operator-5b745b69d9-464cg\\\\nI0218 14:00:08.525033 6213 factory.go:1336] Added *v1.Pod event handler 3\\\\nI0218 14:00:08.525072 6213 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI0218 14:00:08.525084 6213 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-ingress-operator/ingress-operator-5b745b69d9-464cg. OVN-Kubernetes controller took 2.0241e-05 seconds. 
No OVN measurement.\\\\nI0218 14:00:08.525109 6213 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:08.525187 6213 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI0218 14:00:08.525196 6213 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:08.525237 6213 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:08.525247 6213 factory.go:656] Stopping watch factory\\\\nI0218 14:00:08.525292 6213 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:08.525261 6213 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:08.525376 6213 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 14:00:08.525476 6213 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:25Z\\\",\\\"message\\\":\\\" 9\\\\nI0218 14:00:25.413610 6420 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 14:00:25.414952 6420 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 14:00:25.415022 6420 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 14:00:25.415948 6420 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:25.415976 6420 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 14:00:25.416013 6420 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:25.416024 6420 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 14:00:25.416036 6420 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 14:00:25.416041 6420 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 14:00:25.416069 6420 factory.go:656] Stopping watch factory\\\\nI0218 14:00:25.416088 6420 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:25.416118 6420 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:25.416133 6420 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 14:00:25.416141 6420 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:25.416149 6420 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 14:00:25.416159 6420 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 
14\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.884862 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.895400 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.895499 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.895524 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.895554 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.895576 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:25Z","lastTransitionTime":"2026-02-18T14:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.915689 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.930939 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.940812 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.956404 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.970711 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.983187 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.994188 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:25Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.998524 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.998559 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.998571 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.998587 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:25 crc kubenswrapper[4739]: I0218 14:00:25.998599 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:25Z","lastTransitionTime":"2026-02-18T14:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.008180 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.020218 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.034212 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.045528 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f
8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.053652 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.101329 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.101376 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.101390 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.101407 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.101421 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:26Z","lastTransitionTime":"2026-02-18T14:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.204612 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.204676 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.204693 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.204717 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.204736 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:26Z","lastTransitionTime":"2026-02-18T14:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.307679 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.307990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.308055 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.308119 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.308188 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:26Z","lastTransitionTime":"2026-02-18T14:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.406584 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 05:37:39.122772126 +0000 UTC Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.409581 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.409772 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.409655 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.409610 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:26 crc kubenswrapper[4739]: E0218 14:00:26.410169 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:26 crc kubenswrapper[4739]: E0218 14:00:26.410370 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:26 crc kubenswrapper[4739]: E0218 14:00:26.410619 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.410666 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:26 crc kubenswrapper[4739]: E0218 14:00:26.410761 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.410900 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.410972 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.411003 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.411020 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:26Z","lastTransitionTime":"2026-02-18T14:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.514363 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.514825 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.515024 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.515178 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.515343 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:26Z","lastTransitionTime":"2026-02-18T14:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.618926 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.619273 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.619426 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.619676 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.619821 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:26Z","lastTransitionTime":"2026-02-18T14:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.723249 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.723669 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.723821 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.723993 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.724144 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:26Z","lastTransitionTime":"2026-02-18T14:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.779736 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/2.log" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.785031 4739 scope.go:117] "RemoveContainer" containerID="b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d" Feb 18 14:00:26 crc kubenswrapper[4739]: E0218 14:00:26.785477 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.803763 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.823911 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.827380 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.827438 4739 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.827485 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.827531 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.827553 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:26Z","lastTransitionTime":"2026-02-18T14:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.841640 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.861491 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.881805 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.900930 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controll
er-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.918840 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.930663 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.930902 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.931052 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.931194 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.931327 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:26Z","lastTransitionTime":"2026-02-18T14:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.941375 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.957140 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.975576 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b63adf-c60d-4c1e-88dd-3316c9c01ea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24204b574214fd132c4600c72d6efea99d8781e63feeb0ab418a3248413909f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:26 crc kubenswrapper[4739]: I0218 14:00:26.995158 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:26Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.007868 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:27Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.023377 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:27Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.035954 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.036020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.036044 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.036074 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.036098 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:27Z","lastTransitionTime":"2026-02-18T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.056574 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df78
0aef7ed78ce963f6dca3440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:25Z\\\",\\\"message\\\":\\\" 9\\\\nI0218 14:00:25.413610 6420 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 14:00:25.414952 6420 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 14:00:25.415022 6420 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 14:00:25.415948 6420 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:25.415976 6420 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 14:00:25.416013 6420 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:25.416024 6420 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 14:00:25.416036 6420 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 14:00:25.416041 6420 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 14:00:25.416069 6420 factory.go:656] Stopping watch factory\\\\nI0218 14:00:25.416088 6420 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:25.416118 6420 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:25.416133 6420 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 14:00:25.416141 6420 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:25.416149 6420 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 14:00:25.416159 6420 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 14\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:27Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.070782 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:27Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.095130 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af
979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:27Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.116571 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f
3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:27Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.129969 4739 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:27Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.138969 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.139011 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.139024 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.139062 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.139074 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:27Z","lastTransitionTime":"2026-02-18T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.241940 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.242001 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.242016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.242037 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.242052 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:27Z","lastTransitionTime":"2026-02-18T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.344587 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.344630 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.344642 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.344659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.344670 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:27Z","lastTransitionTime":"2026-02-18T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.407725 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 00:16:05.309302459 +0000 UTC Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.448250 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.448312 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.448336 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.448367 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.448390 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:27Z","lastTransitionTime":"2026-02-18T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.551016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.551084 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.551106 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.551133 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.551156 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:27Z","lastTransitionTime":"2026-02-18T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.654582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.654639 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.654658 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.654681 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.654697 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:27Z","lastTransitionTime":"2026-02-18T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.757218 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.757273 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.757291 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.757317 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.757337 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:27Z","lastTransitionTime":"2026-02-18T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.860307 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.860690 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.860979 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.861184 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.861342 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:27Z","lastTransitionTime":"2026-02-18T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.963833 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.963889 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.963901 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.963918 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:27 crc kubenswrapper[4739]: I0218 14:00:27.963929 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:27Z","lastTransitionTime":"2026-02-18T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.067063 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.067300 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.067393 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.067502 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.067601 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:28Z","lastTransitionTime":"2026-02-18T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.170169 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.170222 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.170234 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.170250 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.170265 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:28Z","lastTransitionTime":"2026-02-18T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.272434 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.272501 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.272512 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.272529 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.272539 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:28Z","lastTransitionTime":"2026-02-18T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.375649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.376222 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.376252 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.376287 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.376312 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:28Z","lastTransitionTime":"2026-02-18T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.408015 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 16:53:46.993999686 +0000 UTC Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.409407 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.409437 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.409516 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.409549 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:28 crc kubenswrapper[4739]: E0218 14:00:28.409659 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:28 crc kubenswrapper[4739]: E0218 14:00:28.409809 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:28 crc kubenswrapper[4739]: E0218 14:00:28.409960 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:28 crc kubenswrapper[4739]: E0218 14:00:28.410073 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.437006 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerI
D\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 
14:00:28.454830 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.470739 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.479019 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.479045 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.479054 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.479065 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.479073 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:28Z","lastTransitionTime":"2026-02-18T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.489988 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.505163 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.545269 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:25Z\\\",\\\"message\\\":\\\" 9\\\\nI0218 14:00:25.413610 6420 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 14:00:25.414952 6420 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 14:00:25.415022 6420 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 14:00:25.415948 6420 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:25.415976 6420 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 14:00:25.416013 6420 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:25.416024 6420 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 14:00:25.416036 6420 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 14:00:25.416041 6420 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 14:00:25.416069 6420 factory.go:656] Stopping watch factory\\\\nI0218 14:00:25.416088 6420 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:25.416118 6420 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:25.416133 6420 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 14:00:25.416141 6420 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:25.416149 6420 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 14:00:25.416159 6420 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 14\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.562798 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.580202 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b63adf-c60d-4c1e-88dd-3316c9c01ea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24204b574214fd132c4600c72d6efea99d8781e63feeb0ab418a3248413909f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.581655 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.581891 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.581924 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.581952 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.581972 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:28Z","lastTransitionTime":"2026-02-18T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.598721 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.614504 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.652137 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af
979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.677648 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f
3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.684262 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.684318 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.684330 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.684349 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.684386 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:28Z","lastTransitionTime":"2026-02-18T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.692162 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.705347 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.721804 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.736578 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.751015 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.765546 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:28Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.788687 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.788764 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.788815 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.788835 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.788849 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:28Z","lastTransitionTime":"2026-02-18T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.891806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.891861 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.891878 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.891899 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.891916 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:28Z","lastTransitionTime":"2026-02-18T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.994762 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.994792 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.994801 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.994815 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:28 crc kubenswrapper[4739]: I0218 14:00:28.994823 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:28Z","lastTransitionTime":"2026-02-18T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.098386 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.098483 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.098502 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.098527 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.098544 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:29Z","lastTransitionTime":"2026-02-18T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.200916 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.200959 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.200967 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.200985 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.200995 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:29Z","lastTransitionTime":"2026-02-18T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.303662 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.304020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.304160 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.304290 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.304463 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:29Z","lastTransitionTime":"2026-02-18T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.407932 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.408299 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.409290 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 07:29:26.959316227 +0000 UTC Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.409346 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.409440 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.409504 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:29Z","lastTransitionTime":"2026-02-18T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.512502 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.512956 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.513062 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.513165 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.513261 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:29Z","lastTransitionTime":"2026-02-18T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.616330 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.616380 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.616397 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.616420 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.616439 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:29Z","lastTransitionTime":"2026-02-18T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.719644 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.719718 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.719749 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.719784 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.719805 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:29Z","lastTransitionTime":"2026-02-18T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.822494 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.822901 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.823041 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.823197 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.823336 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:29Z","lastTransitionTime":"2026-02-18T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.926573 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.926626 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.926642 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.926664 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:29 crc kubenswrapper[4739]: I0218 14:00:29.926679 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:29Z","lastTransitionTime":"2026-02-18T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.029359 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.029403 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.029418 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.029438 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.029471 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:30Z","lastTransitionTime":"2026-02-18T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.131473 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.131760 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.131773 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.131799 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.131810 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:30Z","lastTransitionTime":"2026-02-18T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.236651 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.236717 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.236760 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.236796 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.236821 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:30Z","lastTransitionTime":"2026-02-18T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.339218 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.339284 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.339302 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.339328 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.339344 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:30Z","lastTransitionTime":"2026-02-18T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.410125 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 16:17:04.047886194 +0000 UTC Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.410334 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.410382 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.410406 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:30 crc kubenswrapper[4739]: E0218 14:00:30.410490 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.410738 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:30 crc kubenswrapper[4739]: E0218 14:00:30.410797 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:30 crc kubenswrapper[4739]: E0218 14:00:30.410927 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:30 crc kubenswrapper[4739]: E0218 14:00:30.410657 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.442295 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.442341 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.442350 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.442364 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.442374 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:30Z","lastTransitionTime":"2026-02-18T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.545262 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.545339 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.545355 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.545378 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.545396 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:30Z","lastTransitionTime":"2026-02-18T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.647948 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.647998 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.648010 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.648027 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.648040 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:30Z","lastTransitionTime":"2026-02-18T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.751535 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.751597 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.751619 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.751650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.751671 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:30Z","lastTransitionTime":"2026-02-18T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.854491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.854545 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.854561 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.854581 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.854597 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:30Z","lastTransitionTime":"2026-02-18T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.957942 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.957994 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.958006 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.958024 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:30 crc kubenswrapper[4739]: I0218 14:00:30.958035 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:30Z","lastTransitionTime":"2026-02-18T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.061547 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.061615 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.061634 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.061660 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.061678 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:31Z","lastTransitionTime":"2026-02-18T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.165199 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.165272 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.165291 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.165318 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.165336 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:31Z","lastTransitionTime":"2026-02-18T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.268274 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.268315 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.268327 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.268343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.268356 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:31Z","lastTransitionTime":"2026-02-18T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.370900 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.370991 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.371015 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.371041 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.371060 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:31Z","lastTransitionTime":"2026-02-18T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.410652 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 15:21:35.406133448 +0000 UTC Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.473940 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.473967 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.473976 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.473990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.474000 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:31Z","lastTransitionTime":"2026-02-18T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.575998 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.576043 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.576056 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.576074 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.576086 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:31Z","lastTransitionTime":"2026-02-18T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.678473 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.678526 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.678542 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.678562 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.678576 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:31Z","lastTransitionTime":"2026-02-18T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.780771 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.780816 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.780830 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.780850 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.780864 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:31Z","lastTransitionTime":"2026-02-18T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.883785 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.883852 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.883862 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.883877 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.883889 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:31Z","lastTransitionTime":"2026-02-18T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.986806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.986884 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.986908 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.986992 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:31 crc kubenswrapper[4739]: I0218 14:00:31.987025 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:31Z","lastTransitionTime":"2026-02-18T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.089610 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.089671 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.089688 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.089712 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.089732 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:32Z","lastTransitionTime":"2026-02-18T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.192924 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.192967 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.192978 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.192995 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.193007 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:32Z","lastTransitionTime":"2026-02-18T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.295959 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.296015 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.296038 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.296067 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.296091 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:32Z","lastTransitionTime":"2026-02-18T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.398242 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.398294 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.398314 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.398337 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.398354 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:32Z","lastTransitionTime":"2026-02-18T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.409795 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.409863 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:32 crc kubenswrapper[4739]: E0218 14:00:32.410044 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.410078 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.410127 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:32 crc kubenswrapper[4739]: E0218 14:00:32.410241 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:32 crc kubenswrapper[4739]: E0218 14:00:32.410329 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:32 crc kubenswrapper[4739]: E0218 14:00:32.410460 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.411098 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 18:52:38.566043259 +0000 UTC Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.500593 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.500661 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.500674 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.500693 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.500711 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:32Z","lastTransitionTime":"2026-02-18T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.603698 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.603735 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.603744 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.603757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.603766 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:32Z","lastTransitionTime":"2026-02-18T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.705952 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.705997 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.706007 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.706022 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.706031 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:32Z","lastTransitionTime":"2026-02-18T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.808255 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.808288 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.808298 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.808313 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.808322 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:32Z","lastTransitionTime":"2026-02-18T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.910518 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.910550 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.910558 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.910572 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:32 crc kubenswrapper[4739]: I0218 14:00:32.910581 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:32Z","lastTransitionTime":"2026-02-18T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.012908 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.012946 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.012957 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.012971 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.012979 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:33Z","lastTransitionTime":"2026-02-18T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.116170 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.116201 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.116210 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.116224 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.116232 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:33Z","lastTransitionTime":"2026-02-18T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.218523 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.218557 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.218568 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.218582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.218592 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:33Z","lastTransitionTime":"2026-02-18T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.321039 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.321079 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.321089 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.321105 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.321116 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:33Z","lastTransitionTime":"2026-02-18T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.412075 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 04:25:14.628610914 +0000 UTC Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.423691 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.423732 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.423745 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.423763 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.423772 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:33Z","lastTransitionTime":"2026-02-18T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.525986 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.526048 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.526072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.526099 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.526117 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:33Z","lastTransitionTime":"2026-02-18T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.629566 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.629614 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.629655 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.629677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.629692 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:33Z","lastTransitionTime":"2026-02-18T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.732797 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.732838 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.732847 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.732860 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.732870 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:33Z","lastTransitionTime":"2026-02-18T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.834705 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.834776 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.834791 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.834810 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.834823 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:33Z","lastTransitionTime":"2026-02-18T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.937682 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.937746 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.937767 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.937797 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:33 crc kubenswrapper[4739]: I0218 14:00:33.937817 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:33Z","lastTransitionTime":"2026-02-18T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.039683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.039947 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.040043 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.040144 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.040209 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:34Z","lastTransitionTime":"2026-02-18T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.142308 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.142349 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.142395 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.142419 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.142428 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:34Z","lastTransitionTime":"2026-02-18T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.244725 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.244772 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.244784 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.244803 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.244816 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:34Z","lastTransitionTime":"2026-02-18T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.346692 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.346724 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.346734 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.346747 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.346755 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:34Z","lastTransitionTime":"2026-02-18T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.409422 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.409479 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:34 crc kubenswrapper[4739]: E0218 14:00:34.409626 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.409641 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:34 crc kubenswrapper[4739]: E0218 14:00:34.409764 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.409978 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:34 crc kubenswrapper[4739]: E0218 14:00:34.410290 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:34 crc kubenswrapper[4739]: E0218 14:00:34.410197 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.412777 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 12:16:10.107609734 +0000 UTC Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.448861 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.448911 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.448926 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.448946 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.448963 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:34Z","lastTransitionTime":"2026-02-18T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.551075 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.551135 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.551145 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.551159 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.551168 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:34Z","lastTransitionTime":"2026-02-18T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.653350 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.653374 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.653382 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.653402 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.653412 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:34Z","lastTransitionTime":"2026-02-18T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.755853 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.755895 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.755907 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.755924 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.755935 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:34Z","lastTransitionTime":"2026-02-18T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.858577 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.858617 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.858629 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.858647 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.858659 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:34Z","lastTransitionTime":"2026-02-18T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.960489 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.960515 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.960522 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.960534 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:34 crc kubenswrapper[4739]: I0218 14:00:34.960543 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:34Z","lastTransitionTime":"2026-02-18T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.024606 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.024638 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.024648 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.024661 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.024671 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:35 crc kubenswrapper[4739]: E0218 14:00:35.036314 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:35Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.039516 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.039545 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.039555 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.039571 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.039581 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:35 crc kubenswrapper[4739]: E0218 14:00:35.057672 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:35Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.060927 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.060979 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.060992 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.061010 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.061023 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:35 crc kubenswrapper[4739]: E0218 14:00:35.075317 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:35Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.078931 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.078972 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.078982 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.078995 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.079004 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:35 crc kubenswrapper[4739]: E0218 14:00:35.091529 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:35Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.095579 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.095610 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.095622 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.095640 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.095653 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:35 crc kubenswrapper[4739]: E0218 14:00:35.105886 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:35Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:35 crc kubenswrapper[4739]: E0218 14:00:35.105992 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.107479 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.107511 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.107522 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.107537 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.107549 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.210034 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.210105 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.210124 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.210151 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.210169 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.313138 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.313207 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.313220 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.313261 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.313273 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.413587 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 20:36:34.36123046 +0000 UTC Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.437751 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.437788 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.437798 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.437813 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.437823 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.539735 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.539792 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.539807 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.539826 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.539842 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.641870 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.641904 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.641913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.641926 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.641934 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.745013 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.745191 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.745217 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.745292 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.745319 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.847517 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.847568 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.847578 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.847596 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.847609 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.949517 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.949585 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.949598 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.949617 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:35 crc kubenswrapper[4739]: I0218 14:00:35.949628 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:35Z","lastTransitionTime":"2026-02-18T14:00:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.051944 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.051997 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.052010 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.052025 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.052035 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:36Z","lastTransitionTime":"2026-02-18T14:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.154325 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.154370 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.154381 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.154397 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.154408 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:36Z","lastTransitionTime":"2026-02-18T14:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.257306 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.257342 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.257354 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.257369 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.257380 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:36Z","lastTransitionTime":"2026-02-18T14:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.360063 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.360108 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.360120 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.360138 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.360181 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:36Z","lastTransitionTime":"2026-02-18T14:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.409604 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.409631 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.409725 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.409827 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:36 crc kubenswrapper[4739]: E0218 14:00:36.410018 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:36 crc kubenswrapper[4739]: E0218 14:00:36.410166 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:36 crc kubenswrapper[4739]: E0218 14:00:36.410239 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:36 crc kubenswrapper[4739]: E0218 14:00:36.410280 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.413685 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 02:52:09.572906847 +0000 UTC Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.462914 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.462974 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.463132 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.463160 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.463171 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:36Z","lastTransitionTime":"2026-02-18T14:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.566027 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.566057 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.566067 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.566081 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.566090 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:36Z","lastTransitionTime":"2026-02-18T14:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.668852 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.668881 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.668890 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.668904 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.668916 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:36Z","lastTransitionTime":"2026-02-18T14:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.772149 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.772218 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.772235 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.772258 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.772276 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:36Z","lastTransitionTime":"2026-02-18T14:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.874916 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.874956 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.874965 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.874980 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:36 crc kubenswrapper[4739]: I0218 14:00:36.874990 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:36Z","lastTransitionTime":"2026-02-18T14:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:37 crc kubenswrapper[4739]: I0218 14:00:37.414800 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 11:46:08.575717429 +0000 UTC
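The kubelet republishes this five-message cycle (four kubelet_node_status.go:724 events followed by the setters.go:603 condition write) roughly every 100 ms for as long as no CNI configuration file exists in /etc/kubernetes/cni/net.d/. As a hedged illustration only (nothing below appears in the log; it assumes the `kubernetes` Python client and a kubeconfig that can reach this cluster), the Ready condition being rewritten can be watched from outside the node like this:

    # Illustrative sketch, not from the log: poll the Ready condition that the
    # kubelet above keeps republishing. Assumes the `kubernetes` Python client
    # and a working kubeconfig; "crc" is the node name from the entries.
    import time

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    while True:
        node = v1.read_node_status("crc")
        ready = next(c for c in node.status.conditions if c.type == "Ready")
        # While /etc/kubernetes/cni/net.d/ is empty this prints
        # False KubeletNotReady "container runtime network not ready: ..."
        print(ready.status, ready.reason, ready.message)
        if ready.status == "True":
            break
        time.sleep(1)

Once the network plugin writes its configuration, the same setters.go path flips the condition back to True and the NodeNotReady events stop.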
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.314411 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.314458 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.314468 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.314482 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.314489 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:38Z","lastTransitionTime":"2026-02-18T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.409580 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.409627 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm"
Feb 18 14:00:38 crc kubenswrapper[4739]: E0218 14:00:38.409698 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.409713 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:00:38 crc kubenswrapper[4739]: E0218 14:00:38.409790 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e"
Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.409845 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 14:00:38 crc kubenswrapper[4739]: E0218 14:00:38.410144 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:38 crc kubenswrapper[4739]: E0218 14:00:38.411717 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.414919 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 10:41:09.385098714 +0000 UTC Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.416679 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.416722 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.416738 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.416761 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.416778 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:38Z","lastTransitionTime":"2026-02-18T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.422904 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.435107 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.452433 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af
979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.465727 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.476395 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.486316 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.499750 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.515168 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.518623 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.518660 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.518670 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.518684 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.518695 4739 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:38Z","lastTransitionTime":"2026-02-18T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.532235 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.544653 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.557814 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.566707 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.576129 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.585320 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.593728 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.608368 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:25Z\\\",\\\"message\\\":\\\" 9\\\\nI0218 14:00:25.413610 6420 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 14:00:25.414952 6420 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 14:00:25.415022 6420 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 14:00:25.415948 6420 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:25.415976 6420 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 14:00:25.416013 6420 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:25.416024 6420 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 14:00:25.416036 6420 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 14:00:25.416041 6420 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 14:00:25.416069 6420 factory.go:656] Stopping watch factory\\\\nI0218 14:00:25.416088 6420 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:25.416118 6420 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:25.416133 6420 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 14:00:25.416141 6420 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:25.416149 6420 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 14:00:25.416159 6420 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 14\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.618664 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.621272 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.621297 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.621306 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.621318 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.621326 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:38Z","lastTransitionTime":"2026-02-18T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.628478 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b63adf-c60d-4c1e-88dd-3316c9c01ea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24204b574214fd132c4600c72d6efea99d8781e63feeb0ab418a3248413909f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:38Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.724117 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.724148 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.724157 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.724170 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.724178 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:38Z","lastTransitionTime":"2026-02-18T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.826350 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.826489 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.826505 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.826524 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.826537 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:38Z","lastTransitionTime":"2026-02-18T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.928783 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.928816 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.928826 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.928841 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:38 crc kubenswrapper[4739]: I0218 14:00:38.928851 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:38Z","lastTransitionTime":"2026-02-18T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.031174 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.031204 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.031214 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.031229 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.031239 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:39Z","lastTransitionTime":"2026-02-18T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.133960 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.134009 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.134025 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.134046 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.134060 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:39Z","lastTransitionTime":"2026-02-18T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.236420 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.236493 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.236508 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.236526 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.236538 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:39Z","lastTransitionTime":"2026-02-18T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.338958 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.339000 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.339008 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.339022 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.339030 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:39Z","lastTransitionTime":"2026-02-18T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.415505 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 05:37:06.327437664 +0000 UTC Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.441363 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.441410 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.441422 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.441440 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.441466 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:39Z","lastTransitionTime":"2026-02-18T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.544432 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.544519 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.544541 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.544563 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.544579 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:39Z","lastTransitionTime":"2026-02-18T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.647418 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.647490 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.647504 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.647520 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.647532 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:39Z","lastTransitionTime":"2026-02-18T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.750550 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.750613 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.750635 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.750662 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.750682 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:39Z","lastTransitionTime":"2026-02-18T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.853186 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.853243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.853259 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.853278 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.853292 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:39Z","lastTransitionTime":"2026-02-18T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.955793 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.955827 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.955836 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.955848 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:39 crc kubenswrapper[4739]: I0218 14:00:39.955858 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:39Z","lastTransitionTime":"2026-02-18T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.057502 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.057545 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.057558 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.057576 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.057588 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:40Z","lastTransitionTime":"2026-02-18T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.159608 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.159638 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.159646 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.159658 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.159667 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:40Z","lastTransitionTime":"2026-02-18T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.262235 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.262269 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.262279 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.262293 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.262302 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:40Z","lastTransitionTime":"2026-02-18T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.365473 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.365565 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.365583 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.365606 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.365623 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:40Z","lastTransitionTime":"2026-02-18T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.410038 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.410079 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.410079 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.410164 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:40 crc kubenswrapper[4739]: E0218 14:00:40.410248 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:40 crc kubenswrapper[4739]: E0218 14:00:40.410519 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:40 crc kubenswrapper[4739]: E0218 14:00:40.410668 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:40 crc kubenswrapper[4739]: E0218 14:00:40.410685 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.411280 4739 scope.go:117] "RemoveContainer" containerID="b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d" Feb 18 14:00:40 crc kubenswrapper[4739]: E0218 14:00:40.411425 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.415731 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 20:54:12.728179733 +0000 UTC Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.467832 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.467871 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.467882 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.467904 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.467915 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:40Z","lastTransitionTime":"2026-02-18T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.570106 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.570154 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.570171 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.570194 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.570215 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:40Z","lastTransitionTime":"2026-02-18T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.670382 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:40 crc kubenswrapper[4739]: E0218 14:00:40.670542 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:40 crc kubenswrapper[4739]: E0218 14:00:40.670921 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs podName:151d76ab-14d7-4b0b-a930-785156818a3e nodeName:}" failed. No retries permitted until 2026-02-18 14:01:12.670898104 +0000 UTC m=+105.166619046 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs") pod "network-metrics-daemon-nhkmm" (UID: "151d76ab-14d7-4b0b-a930-785156818a3e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.672369 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.672480 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.672491 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.672507 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.672515 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:40Z","lastTransitionTime":"2026-02-18T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.774335 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.774380 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.774391 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.774404 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.774414 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:40Z","lastTransitionTime":"2026-02-18T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.876424 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.876481 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.876490 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.876504 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.876514 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:40Z","lastTransitionTime":"2026-02-18T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.979318 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.979366 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.979379 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.979396 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:40 crc kubenswrapper[4739]: I0218 14:00:40.979409 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:40Z","lastTransitionTime":"2026-02-18T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.082973 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.083203 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.083214 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.083232 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.083243 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:41Z","lastTransitionTime":"2026-02-18T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.185986 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.186024 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.186033 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.186046 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.186054 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:41Z","lastTransitionTime":"2026-02-18T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.288027 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.288098 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.288122 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.288150 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.288167 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:41Z","lastTransitionTime":"2026-02-18T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.392097 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.392172 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.392210 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.392242 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.392261 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:41Z","lastTransitionTime":"2026-02-18T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.416764 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 14:35:42.730398323 +0000 UTC Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.495416 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.495496 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.495514 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.495566 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.495584 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:41Z","lastTransitionTime":"2026-02-18T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.598244 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.598321 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.598339 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.598365 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.598425 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:41Z","lastTransitionTime":"2026-02-18T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.700963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.701006 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.701017 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.701034 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.701051 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:41Z","lastTransitionTime":"2026-02-18T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.803327 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.803385 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.803401 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.803427 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.803468 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:41Z","lastTransitionTime":"2026-02-18T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.906396 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.906513 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.906526 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.906546 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:41 crc kubenswrapper[4739]: I0218 14:00:41.906559 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:41Z","lastTransitionTime":"2026-02-18T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.009307 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.009356 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.009367 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.009386 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.009397 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:42Z","lastTransitionTime":"2026-02-18T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.111969 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.112009 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.112019 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.112034 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.112047 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:42Z","lastTransitionTime":"2026-02-18T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.214764 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.214820 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.214829 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.214842 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.214850 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:42Z","lastTransitionTime":"2026-02-18T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.318084 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.318133 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.318141 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.318160 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.318169 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:42Z","lastTransitionTime":"2026-02-18T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.409632 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.409764 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:42 crc kubenswrapper[4739]: E0218 14:00:42.409766 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.409632 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.409649 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:42 crc kubenswrapper[4739]: E0218 14:00:42.409831 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:42 crc kubenswrapper[4739]: E0218 14:00:42.409881 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:42 crc kubenswrapper[4739]: E0218 14:00:42.410146 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.416915 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 05:45:39.818271571 +0000 UTC Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.420081 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.420117 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.420126 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.420140 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.420150 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:42Z","lastTransitionTime":"2026-02-18T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.522522 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.522577 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.522589 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.522608 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.522620 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:42Z","lastTransitionTime":"2026-02-18T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.624997 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.625046 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.625058 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.625075 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.625088 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:42Z","lastTransitionTime":"2026-02-18T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.727915 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.727990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.728009 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.728026 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.728038 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:42Z","lastTransitionTime":"2026-02-18T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.831114 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.831147 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.831155 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.831169 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.831178 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:42Z","lastTransitionTime":"2026-02-18T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.933254 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.933295 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.933305 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.933321 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:42 crc kubenswrapper[4739]: I0218 14:00:42.933331 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:42Z","lastTransitionTime":"2026-02-18T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.036378 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.036440 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.036493 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.036521 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.036539 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:43Z","lastTransitionTime":"2026-02-18T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.138831 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.138887 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.138899 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.138917 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.138929 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:43Z","lastTransitionTime":"2026-02-18T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.242922 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.242972 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.242985 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.243005 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.243016 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:43Z","lastTransitionTime":"2026-02-18T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.345684 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.345765 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.345779 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.345796 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.345832 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:43Z","lastTransitionTime":"2026-02-18T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.417911 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 02:23:08.604207077 +0000 UTC Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.447813 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.447854 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.447863 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.447875 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.447883 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:43Z","lastTransitionTime":"2026-02-18T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.549863 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.550274 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.550304 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.550328 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.550345 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:43Z","lastTransitionTime":"2026-02-18T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.655435 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.655550 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.655569 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.655597 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.655614 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:43Z","lastTransitionTime":"2026-02-18T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.758217 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.758250 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.758261 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.758275 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.758287 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:43Z","lastTransitionTime":"2026-02-18T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.840351 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9slg_ec8fd6de-f77b-48a7-848f-a1b94e866365/kube-multus/0.log" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.840395 4739 generic.go:334] "Generic (PLEG): container finished" podID="ec8fd6de-f77b-48a7-848f-a1b94e866365" containerID="f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c" exitCode=1 Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.840430 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9slg" event={"ID":"ec8fd6de-f77b-48a7-848f-a1b94e866365","Type":"ContainerDied","Data":"f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c"} Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.842242 4739 scope.go:117] "RemoveContainer" containerID="f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.852318 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T14:00:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.869928 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:43Z\\\",\\\"message\\\":\\\"2026-02-18T13:59:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b\\\\n2026-02-18T13:59:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b to /host/opt/cni/bin/\\\\n2026-02-18T13:59:58Z [verbose] multus-daemon started\\\\n2026-02-18T13:59:58Z [verbose] Readiness Indicator file check\\\\n2026-02-18T14:00:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.878269 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.878296 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.878306 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.878320 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.878330 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:43Z","lastTransitionTime":"2026-02-18T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.884727 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.900741 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.918432 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.937105 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/c
ni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.951244 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.963967 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.977940 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.980710 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.980755 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.980768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.980786 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.980797 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:43Z","lastTransitionTime":"2026-02-18T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:43 crc kubenswrapper[4739]: I0218 14:00:43.992185 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:43Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.009964 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:25Z\\\",\\\"message\\\":\\\" 9\\\\nI0218 14:00:25.413610 6420 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 14:00:25.414952 6420 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 14:00:25.415022 6420 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 14:00:25.415948 6420 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:25.415976 6420 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 14:00:25.416013 6420 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:25.416024 6420 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 14:00:25.416036 6420 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 14:00:25.416041 6420 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 14:00:25.416069 6420 factory.go:656] Stopping watch factory\\\\nI0218 14:00:25.416088 6420 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:25.416118 6420 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:25.416133 6420 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 14:00:25.416141 6420 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:25.416149 6420 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 14:00:25.416159 6420 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 
14\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.023005 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 
14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.037949 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b63adf-c60d-4c1e-88dd-3316c9c01ea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24204b574214fd132c4600c72d6efea99d8781e63feeb0ab418a3248413909f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.051955 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.074338 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.083424 4739 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.083480 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.083488 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.083503 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.083512 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:44Z","lastTransitionTime":"2026-02-18T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.105059 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.129546 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.141239 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.186323 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.186377 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.186391 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.186411 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.186424 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:44Z","lastTransitionTime":"2026-02-18T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.288857 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.288932 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.288941 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.288956 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.288965 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:44Z","lastTransitionTime":"2026-02-18T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.391106 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.391143 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.391153 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.391169 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.391180 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:44Z","lastTransitionTime":"2026-02-18T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.409952 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.410011 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.409963 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.409955 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:44 crc kubenswrapper[4739]: E0218 14:00:44.410067 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:44 crc kubenswrapper[4739]: E0218 14:00:44.410187 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:44 crc kubenswrapper[4739]: E0218 14:00:44.410346 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:44 crc kubenswrapper[4739]: E0218 14:00:44.410403 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.418423 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 12:19:21.237622149 +0000 UTC Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.493873 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.493932 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.493954 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.493983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.494005 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:44Z","lastTransitionTime":"2026-02-18T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.595905 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.595947 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.595958 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.595973 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.595983 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:44Z","lastTransitionTime":"2026-02-18T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.698976 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.699037 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.699054 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.699074 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.699090 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:44Z","lastTransitionTime":"2026-02-18T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.801662 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.801705 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.801716 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.801731 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.801743 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:44Z","lastTransitionTime":"2026-02-18T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.845268 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9slg_ec8fd6de-f77b-48a7-848f-a1b94e866365/kube-multus/0.log" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.845356 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9slg" event={"ID":"ec8fd6de-f77b-48a7-848f-a1b94e866365","Type":"ContainerStarted","Data":"c7e57d4b3d2fa1999cedc5cef8c29dd528fa5f44c130854cb8f7dc0751a2ce67"} Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.856720 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.867233 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b63adf-c60d-4c1e-88dd-3316c9c01ea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24204b574214fd132c4600c72d6efea99d8781e
63feeb0ab418a3248413909f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.878243 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.892720 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.901748 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.904224 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.904250 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.904258 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.904288 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.904297 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:44Z","lastTransitionTime":"2026-02-18T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.918478 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df78
0aef7ed78ce963f6dca3440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:25Z\\\",\\\"message\\\":\\\" 9\\\\nI0218 14:00:25.413610 6420 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 14:00:25.414952 6420 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 14:00:25.415022 6420 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 14:00:25.415948 6420 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:25.415976 6420 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 14:00:25.416013 6420 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:25.416024 6420 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 14:00:25.416036 6420 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 14:00:25.416041 6420 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 14:00:25.416069 6420 factory.go:656] Stopping watch factory\\\\nI0218 14:00:25.416088 6420 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:25.416118 6420 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:25.416133 6420 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 14:00:25.416141 6420 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:25.416149 6420 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 14:00:25.416159 6420 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 14\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.934904 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b
307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.946397 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.954762 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:44 crc kubenswrapper[4739]: I0218 14:00:44.965776 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.003554 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:44Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.005837 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.005873 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.005882 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.005896 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.005905 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.025363 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:45Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.038663 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-18T14:00:45Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.050692 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e57d4b3d2fa1999cedc5cef8c29dd528fa5f44c130854cb8f7dc0751a2ce67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:43Z\\\",\\\"message\\\":\\\"2026-02-18T13:59:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b\\\\n2026-02-18T13:59:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b to /host/opt/cni/bin/\\\\n2026-02-18T13:59:58Z [verbose] multus-daemon started\\\\n2026-02-18T13:59:58Z [verbose] Readiness Indicator file check\\\\n2026-02-18T14:00:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:45Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.061640 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:45Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.077194 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:45Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.099409 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:45Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.108022 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.108056 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.108067 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.108082 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.108093 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.158566 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:45Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.210984 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.211025 4739 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.211034 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.211049 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.211058 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.314347 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.314381 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.314389 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.314402 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.314411 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.418013 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.418061 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.418080 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.418103 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.418120 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.418670 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 16:50:47.8518978 +0000 UTC Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.448100 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.448162 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.448186 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.448216 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.448242 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: E0218 14:00:45.468598 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:45Z is after 
2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.473292 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.473338 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.473350 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.473367 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.473378 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: E0218 14:00:45.486360 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:45Z is after 
2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.490100 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.490174 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.490201 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.490232 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.490256 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: E0218 14:00:45.502924 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:45Z is after 
2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.506314 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.506369 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.506384 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.506402 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.506414 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: E0218 14:00:45.523352 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:45Z is after 
2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.527088 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.527137 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.527154 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.527174 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.527188 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: E0218 14:00:45.540249 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:45Z is after 
2025-08-24T17:21:41Z" Feb 18 14:00:45 crc kubenswrapper[4739]: E0218 14:00:45.540435 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.541913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.541960 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.541977 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.541997 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.542009 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.644934 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.645009 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.645020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.645036 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.645055 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.747665 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.747745 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.747770 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.747801 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.747823 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.850361 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.850395 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.850406 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.850422 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.850434 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.953343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.953415 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.953436 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.953499 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:45 crc kubenswrapper[4739]: I0218 14:00:45.953517 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:45Z","lastTransitionTime":"2026-02-18T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.056225 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.056297 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.056316 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.056342 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.056362 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:46Z","lastTransitionTime":"2026-02-18T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.159086 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.159148 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.159170 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.159200 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.159220 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:46Z","lastTransitionTime":"2026-02-18T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.263031 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.263100 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.263123 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.263152 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.263174 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:46Z","lastTransitionTime":"2026-02-18T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.365351 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.365401 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.365418 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.365465 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.365483 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:46Z","lastTransitionTime":"2026-02-18T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.409934 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.410001 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:46 crc kubenswrapper[4739]: E0218 14:00:46.410135 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.410237 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.410301 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:46 crc kubenswrapper[4739]: E0218 14:00:46.410381 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:46 crc kubenswrapper[4739]: E0218 14:00:46.410504 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:46 crc kubenswrapper[4739]: E0218 14:00:46.410766 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.419130 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:03:32.956348702 +0000 UTC Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.467598 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.467658 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.467676 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.467700 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.467717 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:46Z","lastTransitionTime":"2026-02-18T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.570512 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.570585 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.570602 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.570628 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.570644 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:46Z","lastTransitionTime":"2026-02-18T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.673750 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.673829 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.673844 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.673884 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.673898 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:46Z","lastTransitionTime":"2026-02-18T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.775959 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.776021 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.776043 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.776072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.776098 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:46Z","lastTransitionTime":"2026-02-18T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.879350 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.879407 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.879424 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.879480 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.879497 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:46Z","lastTransitionTime":"2026-02-18T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.982433 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.982504 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.982520 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.982543 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:46 crc kubenswrapper[4739]: I0218 14:00:46.982560 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:46Z","lastTransitionTime":"2026-02-18T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.085029 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.085137 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.085153 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.085179 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.085200 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:47Z","lastTransitionTime":"2026-02-18T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.188013 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.188065 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.188082 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.188104 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.188121 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:47Z","lastTransitionTime":"2026-02-18T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.290367 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.290406 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.290422 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.290461 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.290473 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:47Z","lastTransitionTime":"2026-02-18T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.393850 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.393963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.393989 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.394018 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.394043 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:47Z","lastTransitionTime":"2026-02-18T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.419754 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 23:07:19.316949079 +0000 UTC Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.497140 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.497174 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.497183 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.497199 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.497208 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:47Z","lastTransitionTime":"2026-02-18T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.600071 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.600129 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.600150 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.600175 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.600193 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:47Z","lastTransitionTime":"2026-02-18T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.704097 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.704169 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.704187 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.704214 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.704233 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:47Z","lastTransitionTime":"2026-02-18T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.807283 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.807341 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.807363 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.807392 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.807414 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:47Z","lastTransitionTime":"2026-02-18T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.910175 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.910228 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.910249 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.910277 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:47 crc kubenswrapper[4739]: I0218 14:00:47.910304 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:47Z","lastTransitionTime":"2026-02-18T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.012684 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.012736 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.012755 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.012777 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.012792 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:48Z","lastTransitionTime":"2026-02-18T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.115321 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.115368 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.115378 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.115393 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.115404 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:48Z","lastTransitionTime":"2026-02-18T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.218181 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.218208 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.218217 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.218232 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.218241 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:48Z","lastTransitionTime":"2026-02-18T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.320504 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.320542 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.320553 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.320568 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.320579 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:48Z","lastTransitionTime":"2026-02-18T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.409882 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:48 crc kubenswrapper[4739]: E0218 14:00:48.410000 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.410072 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.410088 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:48 crc kubenswrapper[4739]: E0218 14:00:48.410142 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.410093 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:48 crc kubenswrapper[4739]: E0218 14:00:48.410225 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:48 crc kubenswrapper[4739]: E0218 14:00:48.410272 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.420762 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 05:05:43.388258627 +0000 UTC Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.422323 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.422358 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.422371 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.422392 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.422404 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:48Z","lastTransitionTime":"2026-02-18T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.428825 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.445151 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.462800 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.478139 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.494609 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b63adf-c60d-4c1e-88dd-3316c9c01ea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24204b574214fd132c4600c72d6efea99d8781e63feeb0ab418a3248413909f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.512282 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.524493 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.524540 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.524548 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.524564 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.524574 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:48Z","lastTransitionTime":"2026-02-18T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.529392 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.544806 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.574335 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:25Z\\\",\\\"message\\\":\\\" 9\\\\nI0218 14:00:25.413610 6420 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 14:00:25.414952 6420 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 14:00:25.415022 6420 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 14:00:25.415948 6420 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:25.415976 6420 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 14:00:25.416013 6420 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:25.416024 6420 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 14:00:25.416036 6420 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 14:00:25.416041 6420 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 14:00:25.416069 6420 factory.go:656] Stopping watch factory\\\\nI0218 14:00:25.416088 6420 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:25.416118 6420 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:25.416133 6420 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 14:00:25.416141 6420 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:25.416149 6420 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 14:00:25.416159 6420 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 14\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.592239 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.623399 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af
979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.627784 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.627846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.627864 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.627887 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.627902 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:48Z","lastTransitionTime":"2026-02-18T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.643269 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.657611 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.674265 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.692732 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.710511 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.725813 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.730182 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.730242 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.730265 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.730293 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.730316 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:48Z","lastTransitionTime":"2026-02-18T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.745849 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e57d4b3d2fa1999cedc5cef8c29dd528fa5f44c130854cb8f7dc0751a2ce67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:43Z\\\",\\\"message\\\":\\\"2026-02-18T13:59:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b\\\\n2026-02-18T13:59:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b to /host/opt/cni/bin/\\\\n2026-02-18T13:59:58Z [verbose] multus-daemon started\\\\n2026-02-18T13:59:58Z [verbose] Readiness Indicator file check\\\\n2026-02-18T14:00:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:48Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.832913 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.833486 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.833497 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.833513 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.833523 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:48Z","lastTransitionTime":"2026-02-18T14:00:48Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.935922 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.935990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.936005 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.936021 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:48 crc kubenswrapper[4739]: I0218 14:00:48.936031 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:48Z","lastTransitionTime":"2026-02-18T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.038688 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.038986 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.039079 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.039171 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.039253 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:49Z","lastTransitionTime":"2026-02-18T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.142075 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.142697 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.142839 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.142962 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.143080 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:49Z","lastTransitionTime":"2026-02-18T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.246413 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.246882 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.246983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.247223 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.247424 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:49Z","lastTransitionTime":"2026-02-18T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.350511 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.350581 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.350591 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.350611 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.350624 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:49Z","lastTransitionTime":"2026-02-18T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.421252 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 03:08:06.833244214 +0000 UTC Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.453771 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.454092 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.454333 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.454550 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.454737 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:49Z","lastTransitionTime":"2026-02-18T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.557370 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.557403 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.557414 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.557433 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.557463 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:49Z","lastTransitionTime":"2026-02-18T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.660882 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.660924 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.660940 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.660957 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.660972 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:49Z","lastTransitionTime":"2026-02-18T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.764140 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.764189 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.764206 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.764229 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.764248 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:49Z","lastTransitionTime":"2026-02-18T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.867606 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.867658 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.867669 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.867690 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.867703 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:49Z","lastTransitionTime":"2026-02-18T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.970831 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.970893 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.970908 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.970930 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:49 crc kubenswrapper[4739]: I0218 14:00:49.970945 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:49Z","lastTransitionTime":"2026-02-18T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.073770 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.073830 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.073846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.073870 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.073885 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:50Z","lastTransitionTime":"2026-02-18T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.176411 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.176501 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.176520 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.176546 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.176567 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:50Z","lastTransitionTime":"2026-02-18T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.278869 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.278918 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.278933 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.278955 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.278972 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:50Z","lastTransitionTime":"2026-02-18T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.382436 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.382511 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.382524 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.382543 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.382556 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:50Z","lastTransitionTime":"2026-02-18T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.409393 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.409523 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:50 crc kubenswrapper[4739]: E0218 14:00:50.409552 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.409611 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:50 crc kubenswrapper[4739]: E0218 14:00:50.409782 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.409806 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:50 crc kubenswrapper[4739]: E0218 14:00:50.409864 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:50 crc kubenswrapper[4739]: E0218 14:00:50.409924 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.422372 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 09:37:32.44963749 +0000 UTC Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.485590 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.485649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.485668 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.485696 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.485713 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:50Z","lastTransitionTime":"2026-02-18T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.588984 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.589052 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.589074 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.589099 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.589118 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:50Z","lastTransitionTime":"2026-02-18T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.691758 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.691814 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.691832 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.691858 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.691875 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:50Z","lastTransitionTime":"2026-02-18T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.795144 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.795250 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.795267 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.795291 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.795309 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:50Z","lastTransitionTime":"2026-02-18T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.897713 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.897768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.897790 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.897815 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:50 crc kubenswrapper[4739]: I0218 14:00:50.897832 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:50Z","lastTransitionTime":"2026-02-18T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.000581 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.000644 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.000663 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.000687 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.000704 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:51Z","lastTransitionTime":"2026-02-18T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.103343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.103388 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.103397 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.103411 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.103421 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:51Z","lastTransitionTime":"2026-02-18T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.205702 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.205766 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.205776 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.205793 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.205802 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:51Z","lastTransitionTime":"2026-02-18T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.308748 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.308800 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.308820 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.308846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.308867 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:51Z","lastTransitionTime":"2026-02-18T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.411681 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.411730 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.411746 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.411768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.411783 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:51Z","lastTransitionTime":"2026-02-18T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.423200 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:26:57.275205257 +0000 UTC Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.514536 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.514586 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.514599 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.514619 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.514633 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:51Z","lastTransitionTime":"2026-02-18T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.617401 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.617469 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.617482 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.617499 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.617510 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:51Z","lastTransitionTime":"2026-02-18T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.719973 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.720137 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.720161 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.720236 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.720262 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:51Z","lastTransitionTime":"2026-02-18T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.823649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.823719 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.823737 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.823768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.823791 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:51Z","lastTransitionTime":"2026-02-18T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.926777 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.926824 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.926835 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.926852 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:51 crc kubenswrapper[4739]: I0218 14:00:51.926864 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:51Z","lastTransitionTime":"2026-02-18T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.030074 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.030150 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.030173 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.030202 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.030223 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:52Z","lastTransitionTime":"2026-02-18T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.132660 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.132717 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.132733 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.132757 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.132774 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:52Z","lastTransitionTime":"2026-02-18T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.193992 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.194169 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.194278 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 14:01:56.194239733 +0000 UTC m=+148.689960695 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.194302 4739 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.194383 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.194529 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.194463268 +0000 UTC m=+148.690184210 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.194524 4739 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.194659 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.194648103 +0000 UTC m=+148.690369035 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.234790 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.234846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.234860 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.234881 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.234898 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:52Z","lastTransitionTime":"2026-02-18T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.295493 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.295570 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.295653 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.295674 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.295686 4739 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.295742 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-18 14:01:56.29572683 +0000 UTC m=+148.791447752 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.295740 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.295779 4739 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.295793 4739 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.295855 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.295837022 +0000 UTC m=+148.791557954 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.337508 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.337568 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.337591 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.337619 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.337639 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:52Z","lastTransitionTime":"2026-02-18T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.409888 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.409947 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.409982 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.410497 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.410541 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.410622 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.410767 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:52 crc kubenswrapper[4739]: E0218 14:00:52.410849 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.411074 4739 scope.go:117] "RemoveContainer" containerID="b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.424198 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 12:53:28.662025902 +0000 UTC Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.445570 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.445730 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.445825 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.445872 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.445918 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:52Z","lastTransitionTime":"2026-02-18T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.548570 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.548612 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.548622 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.548637 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.548647 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:52Z","lastTransitionTime":"2026-02-18T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.651203 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.651245 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.651253 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.651266 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.651275 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:52Z","lastTransitionTime":"2026-02-18T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.754043 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.754082 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.754098 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.754117 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.754131 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:52Z","lastTransitionTime":"2026-02-18T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.855890 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.855935 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.855946 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.855963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.855976 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:52Z","lastTransitionTime":"2026-02-18T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.872513 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/2.log" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.874590 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerStarted","Data":"cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed"} Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.875644 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.886004 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:52Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 
14:00:52.906614 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\
\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:52Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.918002 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:52Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.930293 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:52Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.942592 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:52Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.958758 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.958801 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.958809 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.958823 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.958835 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:52Z","lastTransitionTime":"2026-02-18T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.959356 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e57d4b3d2fa1999cedc5cef8c29dd528fa5f44c130854cb8f7dc0751a2ce67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:43Z\\\",\\\"message\\\":\\\"2026-02-18T13:59:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b\\\\n2026-02-18T13:59:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b to /host/opt/cni/bin/\\\\n2026-02-18T13:59:58Z [verbose] multus-daemon started\\\\n2026-02-18T13:59:58Z [verbose] Readiness Indicator file check\\\\n2026-02-18T14:00:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:52Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.970989 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:52Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:52 crc kubenswrapper[4739]: I0218 14:00:52.985401 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:52Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.000053 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:52Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.015462 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.027332 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.040744 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.053841 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.060494 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.060544 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.060561 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.060583 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.060597 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:53Z","lastTransitionTime":"2026-02-18T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.067735 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.086381 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4329e957291efef202b02b980bd6204928a5b0
d86ed948a134aef54272c5ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:25Z\\\",\\\"message\\\":\\\" 9\\\\nI0218 14:00:25.413610 6420 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 14:00:25.414952 6420 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 14:00:25.415022 6420 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 14:00:25.415948 6420 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:25.415976 6420 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 14:00:25.416013 6420 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:25.416024 6420 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 14:00:25.416036 6420 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 14:00:25.416041 6420 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 14:00:25.416069 6420 factory.go:656] Stopping watch factory\\\\nI0218 14:00:25.416088 6420 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:25.416118 6420 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:25.416133 6420 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 14:00:25.416141 6420 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:25.416149 6420 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 14:00:25.416159 6420 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 
14\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.103859 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 
14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.117186 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b63adf-c60d-4c1e-88dd-3316c9c01ea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24204b574214fd132c4600c72d6efea99d8781e63feeb0ab418a3248413909f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.136584 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.163370 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.163411 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.163426 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.163473 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.163491 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:53Z","lastTransitionTime":"2026-02-18T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.266534 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.266608 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.266632 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.266664 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.266687 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:53Z","lastTransitionTime":"2026-02-18T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.368752 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.368821 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.368846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.368875 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.368895 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:53Z","lastTransitionTime":"2026-02-18T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.422675 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.425198 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 06:40:14.544325039 +0000 UTC Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.471504 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.471544 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.471554 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.471570 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.471581 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:53Z","lastTransitionTime":"2026-02-18T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.574309 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.574370 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.574387 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.574409 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.574426 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:53Z","lastTransitionTime":"2026-02-18T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.677922 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.678268 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.678405 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.678568 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.678691 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:53Z","lastTransitionTime":"2026-02-18T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.780633 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.780701 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.780722 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.780753 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.780777 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:53Z","lastTransitionTime":"2026-02-18T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.880261 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/3.log" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.881541 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/2.log" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.889097 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.889370 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.889573 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.889712 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.889829 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:53Z","lastTransitionTime":"2026-02-18T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.890558 4739 generic.go:334] "Generic (PLEG): container finished" podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed" exitCode=1 Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.890656 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed"} Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.890711 4739 scope.go:117] "RemoveContainer" containerID="b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.892101 4739 scope.go:117] "RemoveContainer" containerID="cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed" Feb 18 14:00:53 crc kubenswrapper[4739]: E0218 14:00:53.892586 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.910305 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.942878 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af
979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.961421 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f
3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.972979 4739 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.983294 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.992429 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.992507 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.992523 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.992541 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:53 crc kubenswrapper[4739]: I0218 14:00:53.992555 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:53Z","lastTransitionTime":"2026-02-18T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.001079 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e57d4b3d2fa1999cedc5cef8c29dd528fa5f44c130854cb8f7dc0751a2ce67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:43Z\\\",\\\"message\\\":\\\"2026-02-18T13:59:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b\\\\n2026-02-18T13:59:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b to /host/opt/cni/bin/\\\\n2026-02-18T13:59:58Z [verbose] multus-daemon started\\\\n2026-02-18T13:59:58Z [verbose] Readiness Indicator file check\\\\n2026-02-18T14:00:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:53Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.017177 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.035136 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.053121 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.073503 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.088799 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.094959 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.095018 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.095036 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.095057 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.095074 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:54Z","lastTransitionTime":"2026-02-18T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.105694 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b669cf4-28b3-484f-925b-49d6fab4e165\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://734348fbaddb1f1106c5f33316276e3e4b941e731084a8379fd9bcef39a5f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadc9da4d34341452973f7f10abd33b15c3e8f21b8a71878a055c77c9cbf043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eadc9da4d34341452973f7f10abd33b15c3e8f21b8a71878a055c77c9cbf043d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-
lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.125129 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287
faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.136044 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.149998 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.177103 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2e0b212f0fbfc752e2d9b63b796c3eedab6df780aef7ed78ce963f6dca3440d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:25Z\\\",\\\"message\\\":\\\" 9\\\\nI0218 14:00:25.413610 6420 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 14:00:25.414952 6420 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 14:00:25.415022 6420 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 14:00:25.415948 6420 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:25.415976 6420 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 14:00:25.416013 6420 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 14:00:25.416024 6420 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 14:00:25.416036 6420 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 14:00:25.416041 6420 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 14:00:25.416069 6420 factory.go:656] Stopping watch factory\\\\nI0218 14:00:25.416088 6420 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:25.416118 6420 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:25.416133 6420 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 14:00:25.416141 6420 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 14:00:25.416149 6420 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 14:00:25.416159 6420 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 14\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:53Z\\\",\\\"message\\\":\\\"go:551] Creating *factory.egressNode crc took: 6.568708ms\\\\nI0218 
14:00:53.294863 6867 factory.go:1336] Added *v1.Node event handler 7\\\\nI0218 14:00:53.294912 6867 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 14:00:53.294940 6867 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}\\\\nI0218 14:00:53.294960 6867 services_controller.go:360] Finished syncing service check-endpoints on namespace openshift-apiserver for network=default : 2.349587ms\\\\nI0218 14:00:53.294984 6867 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:53.295010 6867 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 14:00:53.295052 6867 factory.go:656] Stopping watch factory\\\\nI0218 14:00:53.295091 6867 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 14:00:53.295111 6867 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:53.295330 6867 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 14:00:53.295466 6867 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 14:00:53.295513 6867 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:53.295541 6867 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 14:00:53.295620 6867 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc
682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.192536 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 
14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.197580 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.197680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.197702 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.197727 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.197745 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:54Z","lastTransitionTime":"2026-02-18T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.206869 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b63adf-c60d-4c1e-88dd-3316c9c01ea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24204b574214fd132c4600c72d6efea99d8781e63feeb0ab418a3248413909f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.226183 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.300184 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.300243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.300265 4739 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.300291 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.300313 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:54Z","lastTransitionTime":"2026-02-18T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.403744 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.403854 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.403882 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.403917 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.403945 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:54Z","lastTransitionTime":"2026-02-18T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.410271 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.410396 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.410545 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:54 crc kubenswrapper[4739]: E0218 14:00:54.410426 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.410437 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:54 crc kubenswrapper[4739]: E0218 14:00:54.410675 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:54 crc kubenswrapper[4739]: E0218 14:00:54.410718 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:54 crc kubenswrapper[4739]: E0218 14:00:54.410796 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.425749 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 16:12:10.104358458 +0000 UTC Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.506527 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.506560 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.506569 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.506588 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.506599 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:54Z","lastTransitionTime":"2026-02-18T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.608758 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.608790 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.608799 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.608814 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.608822 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:54Z","lastTransitionTime":"2026-02-18T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.711763 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.711832 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.711853 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.711884 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.711908 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:54Z","lastTransitionTime":"2026-02-18T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.815162 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.815206 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.815214 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.815229 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.815239 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:54Z","lastTransitionTime":"2026-02-18T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.894644 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/3.log" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.897888 4739 scope.go:117] "RemoveContainer" containerID="cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed" Feb 18 14:00:54 crc kubenswrapper[4739]: E0218 14:00:54.898062 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.912580 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identi
ty-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.918016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.918052 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.918063 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.918081 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.918093 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:54Z","lastTransitionTime":"2026-02-18T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.926144 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.946957 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.980898 4739 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:53Z\\\",\\\"message\\\":\\\"go:551] Creating *factory.egressNode crc took: 6.568708ms\\\\nI0218 14:00:53.294863 6867 factory.go:1336] Added *v1.Node event handler 7\\\\nI0218 14:00:53.294912 6867 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 14:00:53.294940 6867 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}\\\\nI0218 14:00:53.294960 6867 services_controller.go:360] Finished syncing service check-endpoints on namespace openshift-apiserver for network=default : 2.349587ms\\\\nI0218 14:00:53.294984 6867 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:53.295010 6867 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 14:00:53.295052 6867 factory.go:656] Stopping watch factory\\\\nI0218 14:00:53.295091 6867 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 14:00:53.295111 6867 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:53.295330 6867 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 14:00:53.295466 6867 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 14:00:53.295513 6867 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:53.295541 6867 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 14:00:53.295620 6867 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:54 crc kubenswrapper[4739]: I0218 14:00:54.998096 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:54Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.012264 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b63adf-c60d-4c1e-88dd-3316c9c01ea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24204b574214fd132c4600c72d6efea99d8781e63feeb0ab418a3248413909f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.021678 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.021720 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.021731 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.021749 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.021760 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.030909 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.045937 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.079688 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af
979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.105169 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.124290 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.124413 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.124435 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.124479 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.124497 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.128583 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.144098 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.168414 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e57d4b3d2fa1999cedc5cef8c29dd528fa5f44c130854cb8f7dc0751a2ce67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:43Z\\\",\\\"message\\\":\\\"2026-02-18T13:59:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b\\\\n2026-02-18T13:59:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b to /host/opt/cni/bin/\\\\n2026-02-18T13:59:58Z [verbose] multus-daemon started\\\\n2026-02-18T13:59:58Z [verbose] Readiness Indicator file check\\\\n2026-02-18T14:00:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.189158 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.205372 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba
8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.221554 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.227213 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.227256 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.227272 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.227296 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.227313 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.247289 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.266062 4739 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.280052 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b669cf4-28b3-484f-925b-49d6fab4e165\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://734348fbaddb1f1106c5f33316276e3e4b941e731084a8379fd9bcef39a5f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadc9da4d34341452973f7f10abd33b15c3e8f21b8a71878a055c77c9cbf043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eadc9da4d34341452973f7f10abd33b15c3e8f21b8a71878a055c77c9cbf043d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.329351 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.329475 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.329496 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.329520 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.329538 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.426277 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 00:09:54.38791185 +0000 UTC Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.433074 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.433163 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.433187 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.433217 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.433238 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.535756 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.535789 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.535799 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.535813 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.535823 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.637929 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.638151 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.638159 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.638172 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.638180 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.740868 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.740901 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.740909 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.740921 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.740929 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.843819 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.843881 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.843899 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.843923 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.843943 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.858734 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.858806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.858824 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.858848 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.858865 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: E0218 14:00:55.880222 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.884609 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.884660 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.884677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.884731 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.884748 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: E0218 14:00:55.904911 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.908814 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.908868 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.908887 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.908910 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.908927 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: E0218 14:00:55.928180 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.931677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.931726 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.931744 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.931769 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.931810 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: E0218 14:00:55.949101 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.952582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.952635 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.952649 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.952670 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.952703 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:55 crc kubenswrapper[4739]: E0218 14:00:55.964098 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:55Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:55 crc kubenswrapper[4739]: E0218 14:00:55.964325 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.966271 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.966336 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.966356 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.966380 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:55 crc kubenswrapper[4739]: I0218 14:00:55.966398 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:55Z","lastTransitionTime":"2026-02-18T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.069177 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.069211 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.069220 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.069233 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.069241 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:56Z","lastTransitionTime":"2026-02-18T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.171380 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.171468 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.171488 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.171514 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.171535 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:56Z","lastTransitionTime":"2026-02-18T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.274505 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.274559 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.274573 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.274593 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.274609 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:56Z","lastTransitionTime":"2026-02-18T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.377663 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.377705 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.377715 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.377731 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.377742 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:56Z","lastTransitionTime":"2026-02-18T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.409378 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.409439 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.409480 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:56 crc kubenswrapper[4739]: E0218 14:00:56.409614 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.409640 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:56 crc kubenswrapper[4739]: E0218 14:00:56.409751 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:56 crc kubenswrapper[4739]: E0218 14:00:56.409854 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:56 crc kubenswrapper[4739]: E0218 14:00:56.409955 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.427356 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 18:59:07.633339544 +0000 UTC Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.480406 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.480471 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.480482 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.480500 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.480510 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:56Z","lastTransitionTime":"2026-02-18T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.583728 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.583798 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.583812 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.583841 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.583862 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:56Z","lastTransitionTime":"2026-02-18T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.687433 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.687521 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.687535 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.687561 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.687580 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:56Z","lastTransitionTime":"2026-02-18T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.791344 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.791396 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.791418 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.791489 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.791515 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:56Z","lastTransitionTime":"2026-02-18T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.894471 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.894555 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.894568 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.894590 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.894604 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:56Z","lastTransitionTime":"2026-02-18T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.997617 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.997683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.997704 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.997729 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:56 crc kubenswrapper[4739]: I0218 14:00:56.997746 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:56Z","lastTransitionTime":"2026-02-18T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.100263 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.100320 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.100337 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.100363 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.100380 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:57Z","lastTransitionTime":"2026-02-18T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.203537 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.203609 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.203625 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.203647 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.203662 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:57Z","lastTransitionTime":"2026-02-18T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.305836 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.305965 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.305994 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.306020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.306038 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:57Z","lastTransitionTime":"2026-02-18T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.410329 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.410386 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.410404 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.410431 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.410481 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:57Z","lastTransitionTime":"2026-02-18T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.428232 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 02:33:31.360867868 +0000 UTC Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.513522 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.513628 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.513657 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.513691 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.513712 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:57Z","lastTransitionTime":"2026-02-18T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.616957 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.617030 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.617047 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.617071 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.617086 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:57Z","lastTransitionTime":"2026-02-18T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.719877 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.719943 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.719958 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.719982 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.720001 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:57Z","lastTransitionTime":"2026-02-18T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.823002 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.823060 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.823078 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.823100 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.823117 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:57Z","lastTransitionTime":"2026-02-18T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.926433 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.926544 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.926568 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.926600 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:57 crc kubenswrapper[4739]: I0218 14:00:57.926625 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:57Z","lastTransitionTime":"2026-02-18T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.029819 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.029853 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.029861 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.029874 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.029881 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:58Z","lastTransitionTime":"2026-02-18T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.133222 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.133282 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.133299 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.133322 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.133338 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:58Z","lastTransitionTime":"2026-02-18T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.236920 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.236964 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.236975 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.236990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.237001 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:58Z","lastTransitionTime":"2026-02-18T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.339947 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.340026 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.340074 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.340098 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.340115 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:58Z","lastTransitionTime":"2026-02-18T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.409687 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:00:58 crc kubenswrapper[4739]: E0218 14:00:58.409869 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.409643 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.409970 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:00:58 crc kubenswrapper[4739]: E0218 14:00:58.410192 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.409970 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:00:58 crc kubenswrapper[4739]: E0218 14:00:58.410385 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:00:58 crc kubenswrapper[4739]: E0218 14:00:58.410558 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.428566 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 17:38:47.845607318 +0000 UTC Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.428737 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"151d76ab-14d7-4b0b-a930-785156818a3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mx99g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-nhkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.443566 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.443626 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.443647 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.443672 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.443690 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:58Z","lastTransitionTime":"2026-02-18T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.447638 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b669cf4-28b3-484f-925b-49d6fab4e165\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://734348fbaddb1f1106c5f33316276e3e4b941e731084a8379fd9bcef39a5f687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eadc9da4d34341452973f7f10abd33b15c3e8f21b8a71878a055c77c9cbf043d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eadc9da4d34341452973f7f10abd33b15c3e8f21b8a71878a055c77c9cbf043d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.469843 4739 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"174152a1-0b5b-44b6-8259-9268923bf099\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ad48569f75326187d274569e8ea151c835211e9b24a9a27925eef419be8affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb3ec0097ad793d26a2f5749dc2a3917daedcb73eac7558b0f05c4763b5f8d6c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://0b2cfff33daaba90d3318590ed4ae0cd2bfb6a9b495e4efdb68932d71cb4b2d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.488556 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.511489 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"617869cd-510c-4491-a8f7-1a7bb2656f26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a21e42ffcc7086675f09da09dacb6d130f0601725359d5d622e56e405fc175d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c81afbfa4eb17e5c23c0dcea7cabd7bf9cb242d975e07ef154a4394d7da0cb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e6a5b1bc75ae0c7e16cdf2d4d202261d8334276093b729c3edc970aab4c669b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2549a3f0d2ef919597f1da83dbe87576623e7911da2a7a6ebf00a5beae9bb148\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e0709f5274fc8193ac5084289ab013c64ace6dea7b3baded0c66efe23decd5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c5cbce03921eb38ef4987e3d84a466e9e48fab38168c8590edef43b7efaa578\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64a612d318b8c505372dbc3a6459a5c56d7cd0b22332bbb0be2428ec5df5533e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-875sv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ltvvj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.544599 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd4329e957291efef202b02b980bd6204928a5b0
d86ed948a134aef54272c5ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:53Z\\\",\\\"message\\\":\\\"go:551] Creating *factory.egressNode crc took: 6.568708ms\\\\nI0218 14:00:53.294863 6867 factory.go:1336] Added *v1.Node event handler 7\\\\nI0218 14:00:53.294912 6867 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 14:00:53.294940 6867 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/check-endpoints\\\\\\\"}\\\\nI0218 14:00:53.294960 6867 services_controller.go:360] Finished syncing service check-endpoints on namespace openshift-apiserver for network=default : 2.349587ms\\\\nI0218 14:00:53.294984 6867 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 14:00:53.295010 6867 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 14:00:53.295052 6867 factory.go:656] Stopping watch factory\\\\nI0218 14:00:53.295091 6867 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 14:00:53.295111 6867 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 14:00:53.295330 6867 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 14:00:53.295466 6867 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 14:00:53.295513 6867 ovnkube.go:599] Stopped ovnkube\\\\nI0218 14:00:53.295541 6867 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 14:00:53.295620 6867 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T14:00:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtd5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-x4j94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.545664 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.545714 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.545730 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.545751 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.545765 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:58Z","lastTransitionTime":"2026-02-18T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.560926 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdde800e-9fbf-44dc-af43-d9cfc15dfecd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e29d67f1a73a7f769b66e8f3aff0d85addd20f1e9380a613da33401b9c116733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a462bede84d2d3dda8669c31184255e983a29f01e59f3d0d8df19bf140138f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-99ghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T14:00:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9rjzr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.578828 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b63adf-c60d-4c1e-88dd-3316c9c01ea2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62776111add44cc4962fc56acaa6697bf75b0b3954bf137b91721bdb0673328b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b36d898e983eb57fc61b9d80a8bace5056c8612817cacc5ec4bf2a155647ca5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24204b574214fd132c4600c72d6efea99d8781e63feeb0ab418a3248413909f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://846f0862c642331e51668a9ee76c2d264df8beb36bdebc9986828f7dff08e328\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.601264 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368d9a8d97bbc64395450ed60d0106fdc56e4e4e919c871dc6eca26d27adafdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e1daa36b2af22ab825fc2fad2e12874920bb462db1a880b75dcf7d82fab6137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.616364 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54cd68a18261f70977a57060399ba5db95bddb66c7337b549c0d6f8cc088e978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.633480 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"947a1bc9-4557-4cd9-aa90-9d3893aad914\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c44da96521b8c0023168e972c81c827276875287a9013b6c0c0f4b12abc9a801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mc7b4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.649214 4739 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.649275 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.649290 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.649316 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.649332 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:58Z","lastTransitionTime":"2026-02-18T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.666161 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b157ccf5-6a41-4aba-9409-7631a9e1ea10\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20efc5d146f8b86b03d1c0e0857165a64c0a9976eb095423c42000a40ff21a71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8074ff5bd5d340b3c146201b307bb1a6f0e75e08e301269fd47cbe2b2478b43b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a7f5532562de8564e19e5590f2dab1792948fc545bb7b2ffc49d05faca90b28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e39873f4f22ea0b8e448ced38da14935ab8af979d6dbd81e4c60fabcbce6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://052ac81897865101bf890064f99ce5a0ec798abc3fc8b0c9f6f8fbc92fce1f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6cd600bb43a22ae8e2bbb5fe4f0c142c04d1e0caf6b3ecaf23967cec2f824d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31e119428d86405db81f15e733f1982cbacd790e8ec6371a6d4e4f7247741ec5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b339f825803cd58f0317b9fbac7d4fd0971df118de22b009323cdb21efeb85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.687598 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T13:59:47Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0218 13:59:42.200390 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 13:59:42.201145 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1958040301/tls.crt::/tmp/serving-cert-1958040301/tls.key\\\\\\\"\\\\nI0218 13:59:47.559986 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 13:59:47.565379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 13:59:47.565653 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 13:59:47.565706 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 13:59:47.565716 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 13:59:47.575854 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 13:59:47.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 13:59:47.575889 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 13:59:47.575914 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 13:59:47.575918 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 13:59:47.575921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 13:59:47.575955 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 13:59:47.579417 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T13:59:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.703898 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p98v4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15ef6462-8149-4976-b2f8-26123d8081ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7a352fd175c208d8355b53a7ba65d10f6a47033e4a526ce96d9e22b04e0ba47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4gwp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p98v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.722992 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-h9slg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec8fd6de-f77b-48a7-848f-a1b94e866365\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T14:00:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7e57d4b3d2fa1999cedc5cef8c29dd528fa5f44c130854cb8f7dc0751a2ce67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T14:00:43Z\\\",\\\"message\\\":\\\"2026-02-18T13:59:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b\\\\n2026-02-18T13:59:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_fbf8d1bc-7ca2-4bf1-8d16-d0fc153f241b to /host/opt/cni/bin/\\\\n2026-02-18T13:59:58Z [verbose] multus-daemon started\\\\n2026-02-18T13:59:58Z [verbose] Readiness Indicator file check\\\\n2026-02-18T14:00:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T14:00:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lsrwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:54Z\\\"}}\" for pod \"openshift-multus\"/\"multus-h9slg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.739714 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.752961 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.753019 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.753038 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.753064 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.753082 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:58Z","lastTransitionTime":"2026-02-18T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.762646 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c070d8f802ada42836f8a0cdb33d06ca3f7f2b32e968edd0ce65e506101d024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.782239 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.799922 4739 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mdk59" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef364cd3-8b0e-4ebb-96a9-f660f4dd776a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T13:59:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2cb97b083fc6acf67441bae694ff7811e61d0eeb270c264a525d7e3bef7094e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T13:59:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6csts\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T13:59:53Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mdk59\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:00:58Z is after 2025-08-24T17:21:41Z" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.855991 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.856044 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.856059 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.856080 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.856096 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:58Z","lastTransitionTime":"2026-02-18T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.959094 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.959163 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.959175 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.959194 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:58 crc kubenswrapper[4739]: I0218 14:00:58.959227 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:58Z","lastTransitionTime":"2026-02-18T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.062267 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.062331 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.062355 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.062384 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.062406 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:59Z","lastTransitionTime":"2026-02-18T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.166199 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.166239 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.166250 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.166289 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.166302 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:59Z","lastTransitionTime":"2026-02-18T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.269866 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.269930 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.269954 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.269982 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.270003 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:59Z","lastTransitionTime":"2026-02-18T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.372280 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.372370 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.372409 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.372547 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.372608 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:59Z","lastTransitionTime":"2026-02-18T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.429132 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 13:22:47.694989061 +0000 UTC Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.475202 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.475252 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.475264 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.475281 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.475291 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:59Z","lastTransitionTime":"2026-02-18T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.577483 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.577555 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.577576 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.577606 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.577626 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:59Z","lastTransitionTime":"2026-02-18T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.685261 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.685332 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.685343 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.685358 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.685367 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:59Z","lastTransitionTime":"2026-02-18T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.788941 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.789054 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.789073 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.789185 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.789204 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:59Z","lastTransitionTime":"2026-02-18T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.891567 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.891619 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.891633 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.891655 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.891671 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:59Z","lastTransitionTime":"2026-02-18T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.995389 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.995478 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.995500 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.995530 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:00:59 crc kubenswrapper[4739]: I0218 14:00:59.995549 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:00:59Z","lastTransitionTime":"2026-02-18T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.098176 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.098403 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.098523 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.098594 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.098664 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:00Z","lastTransitionTime":"2026-02-18T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.201908 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.201958 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.201972 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.201990 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.202002 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:00Z","lastTransitionTime":"2026-02-18T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.304598 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.304650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.304665 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.304685 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.304700 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:00Z","lastTransitionTime":"2026-02-18T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.407562 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.407592 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.407600 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.407612 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.407621 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:00Z","lastTransitionTime":"2026-02-18T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.411614 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:00 crc kubenswrapper[4739]: E0218 14:01:00.411747 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.411965 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:00 crc kubenswrapper[4739]: E0218 14:01:00.412019 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.412120 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:00 crc kubenswrapper[4739]: E0218 14:01:00.412169 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.412263 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:00 crc kubenswrapper[4739]: E0218 14:01:00.412314 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.429830 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 16:46:23.394490779 +0000 UTC Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.510515 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.510910 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.511079 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.511208 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.511337 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:00Z","lastTransitionTime":"2026-02-18T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.614400 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.614732 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.614808 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.614889 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.614946 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:00Z","lastTransitionTime":"2026-02-18T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.718869 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.718943 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.718963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.718992 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.719013 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:00Z","lastTransitionTime":"2026-02-18T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.821647 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.821707 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.821727 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.821752 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.821766 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:00Z","lastTransitionTime":"2026-02-18T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.923488 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.923729 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.923792 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.923885 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:00 crc kubenswrapper[4739]: I0218 14:01:00.923990 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:00Z","lastTransitionTime":"2026-02-18T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.027066 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.027540 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.027703 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.027867 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.028022 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:01Z","lastTransitionTime":"2026-02-18T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.130888 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.130963 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.130981 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.131007 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.131025 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:01Z","lastTransitionTime":"2026-02-18T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.234040 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.234098 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.234112 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.234129 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.234139 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:01Z","lastTransitionTime":"2026-02-18T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.337427 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.337769 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.337882 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.337980 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.338061 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:01Z","lastTransitionTime":"2026-02-18T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.430908 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 03:40:54.724329884 +0000 UTC Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.443899 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.443946 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.443959 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.443975 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.443985 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:01Z","lastTransitionTime":"2026-02-18T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.546160 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.546220 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.546229 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.546243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.546253 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:01Z","lastTransitionTime":"2026-02-18T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.648980 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.649393 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.649616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.649789 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.649925 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:01Z","lastTransitionTime":"2026-02-18T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.752946 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.752983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.752992 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.753006 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.753017 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:01Z","lastTransitionTime":"2026-02-18T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.855242 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.855289 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.855299 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.855316 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.855327 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:01Z","lastTransitionTime":"2026-02-18T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.958227 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.958305 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.958328 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.958355 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:01 crc kubenswrapper[4739]: I0218 14:01:01.958372 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:01Z","lastTransitionTime":"2026-02-18T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.061094 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.061147 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.061164 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.061185 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.061202 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:02Z","lastTransitionTime":"2026-02-18T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.164063 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.164137 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.164161 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.164191 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.164213 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:02Z","lastTransitionTime":"2026-02-18T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.267041 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.267122 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.267145 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.267176 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.267198 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:02Z","lastTransitionTime":"2026-02-18T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.370344 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.370381 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.370390 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.370406 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.370414 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:02Z","lastTransitionTime":"2026-02-18T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.410328 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.410490 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:02 crc kubenswrapper[4739]: E0218 14:01:02.410610 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.410735 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.410781 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:02 crc kubenswrapper[4739]: E0218 14:01:02.410838 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:02 crc kubenswrapper[4739]: E0218 14:01:02.410965 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:02 crc kubenswrapper[4739]: E0218 14:01:02.411073 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.431586 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 18:44:08.554272092 +0000 UTC Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.473269 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.473652 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.473805 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.473993 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.474162 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:02Z","lastTransitionTime":"2026-02-18T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.576826 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.577154 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.577295 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.577536 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.577700 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:02Z","lastTransitionTime":"2026-02-18T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.680750 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.680792 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.680803 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.680821 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.680832 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:02Z","lastTransitionTime":"2026-02-18T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.783775 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.783816 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.783826 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.783842 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.783857 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:02Z","lastTransitionTime":"2026-02-18T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.886315 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.886380 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.886395 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.886415 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.886431 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:02Z","lastTransitionTime":"2026-02-18T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.989535 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.989577 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.989593 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.989613 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:02 crc kubenswrapper[4739]: I0218 14:01:02.989629 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:02Z","lastTransitionTime":"2026-02-18T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.093095 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.093146 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.093163 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.093185 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.093202 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:03Z","lastTransitionTime":"2026-02-18T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.195846 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.195872 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.195881 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.195895 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.195904 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:03Z","lastTransitionTime":"2026-02-18T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.299303 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.299342 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.299350 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.299363 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.299375 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:03Z","lastTransitionTime":"2026-02-18T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.402294 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.402351 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.402369 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.402391 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.402409 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:03Z","lastTransitionTime":"2026-02-18T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.432274 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 23:28:22.250237621 +0000 UTC
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.505770 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.505834 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.505849 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.505869 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.505883 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:03Z","lastTransitionTime":"2026-02-18T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.608233 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.608278 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.608290 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.608309 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.608321 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:03Z","lastTransitionTime":"2026-02-18T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.711551 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.711612 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.711629 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.711656 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.711674 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:03Z","lastTransitionTime":"2026-02-18T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.814020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.814072 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.814085 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.814102 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.814115 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:03Z","lastTransitionTime":"2026-02-18T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.917768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.917848 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.917865 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.917887 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:03 crc kubenswrapper[4739]: I0218 14:01:03.917905 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:03Z","lastTransitionTime":"2026-02-18T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.020843 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.020921 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.020935 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.020952 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.020962 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:04Z","lastTransitionTime":"2026-02-18T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.123962 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.124010 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.124021 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.124039 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.124054 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:04Z","lastTransitionTime":"2026-02-18T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.227366 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.227426 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.227477 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.227507 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.227529 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:04Z","lastTransitionTime":"2026-02-18T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.329743 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.329812 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.329836 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.329864 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.329887 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:04Z","lastTransitionTime":"2026-02-18T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.409908 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.409950 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 14:01:04 crc kubenswrapper[4739]: E0218 14:01:04.410040 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.410067 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:01:04 crc kubenswrapper[4739]: E0218 14:01:04.410133 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.410175 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm"
Feb 18 14:01:04 crc kubenswrapper[4739]: E0218 14:01:04.410370 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:04 crc kubenswrapper[4739]: E0218 14:01:04.410404 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.432395 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 14:30:47.777927803 +0000 UTC Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.433054 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.433128 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.433145 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.433166 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.433214 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:04Z","lastTransitionTime":"2026-02-18T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.536136 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.536198 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.536215 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.536238 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.536254 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:04Z","lastTransitionTime":"2026-02-18T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.638834 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.638887 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.638905 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.638939 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.638987 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:04Z","lastTransitionTime":"2026-02-18T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.742024 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.742071 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.742083 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.742099 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.742109 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:04Z","lastTransitionTime":"2026-02-18T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.845640 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.845727 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.845748 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.845779 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.845800 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:04Z","lastTransitionTime":"2026-02-18T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.949203 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.949273 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.949296 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.949328 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:04 crc kubenswrapper[4739]: I0218 14:01:04.949349 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:04Z","lastTransitionTime":"2026-02-18T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.052647 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.052708 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.052732 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.052758 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.052778 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:05Z","lastTransitionTime":"2026-02-18T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.155242 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.155301 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.155320 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.155349 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.155394 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:05Z","lastTransitionTime":"2026-02-18T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.257573 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.257637 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.257654 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.257678 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.257693 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:05Z","lastTransitionTime":"2026-02-18T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.360090 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.360144 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.360162 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.360186 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.360202 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:05Z","lastTransitionTime":"2026-02-18T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.432541 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 03:43:56.719541515 +0000 UTC
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.462614 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.462666 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.462691 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.462721 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.462741 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:05Z","lastTransitionTime":"2026-02-18T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.565329 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.565360 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.565369 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.565382 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.565391 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:05Z","lastTransitionTime":"2026-02-18T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.668488 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.668539 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.668554 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.668574 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.668587 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:05Z","lastTransitionTime":"2026-02-18T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.771014 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.771052 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.771063 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.771079 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.771092 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:05Z","lastTransitionTime":"2026-02-18T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.873550 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.873590 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.873599 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.873613 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.873622 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:05Z","lastTransitionTime":"2026-02-18T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.976929 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.976983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.976998 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.977017 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:05 crc kubenswrapper[4739]: I0218 14:01:05.977029 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:05Z","lastTransitionTime":"2026-02-18T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.031144 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.031206 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.031231 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.031265 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.031288 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:06 crc kubenswrapper[4739]: E0218 14:01:06.052067 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:01:06Z is after 2025-08-24T17:21:41Z"
2025-08-24T17:21:41Z" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.057008 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.057048 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.057060 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.057076 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.057085 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:06 crc kubenswrapper[4739]: E0218 14:01:06.073324 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:01:06Z is after 
2025-08-24T17:21:41Z" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.077961 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.078007 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.078021 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.078045 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.078069 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:06 crc kubenswrapper[4739]: E0218 14:01:06.093281 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:01:06Z is after 
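Annotation: the patch itself is well formed; it is rejected because the API server must consult the validating webhook node.network-node-identity.openshift.io before admitting the node update, and that webhook's serving certificate expired on 2025-08-24 while the node's clock reads 2026-02-18. A quick way to confirm which certificate is stale is to dial the endpoint from the log and read the presented chain. The sketch below is a hypothetical diagnostic, not part of the cluster tooling; only the address 127.0.0.1:9743 comes from the log.

```go
// certcheck.go: minimal diagnostic sketch; illustrative only.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Skip verification so the handshake succeeds even with an expired chain;
	// we only want to read the certificate dates, not trust the peer.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notAfter=%s expired=%v\n",
			cert.Subject.CommonName,
			cert.NotAfter.Format(time.RFC3339),
			time.Now().After(cert.NotAfter))
	}
}
```

Run on the node, this would be expected to print expired=true for the leaf certificate, matching the x509 error in the entries above.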
2025-08-24T17:21:41Z" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.097812 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.097861 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.097873 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.097890 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.097905 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:06 crc kubenswrapper[4739]: E0218 14:01:06.116910 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:01:06Z is after 
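Annotation: the $setElementOrder/conditions key in the rejected payload is a strategic-merge-patch directive. It pins the ordering of the conditions list (whose merge key is type), while the conditions entries that follow carry only the changed fields. A sketch of building the same shape of patch in Go; the field values are abbreviated and purely illustrative.

```go
// patchshape.go: illustrative reconstruction of the patch layout seen in the log.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Order directive first, then the updated elements, as in the kubelet's patch.
	patch := map[string]any{
		"status": map[string]any{
			"$setElementOrder/conditions": []map[string]string{
				{"type": "MemoryPressure"}, {"type": "DiskPressure"},
				{"type": "PIDPressure"}, {"type": "Ready"},
			},
			"conditions": []map[string]string{
				{"type": "Ready", "status": "False", "reason": "KubeletNotReady"},
			},
		},
	}
	b, _ := json.Marshal(patch)
	fmt.Println(string(b))
}
```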
2025-08-24T17:21:41Z" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.121019 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.121076 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.121090 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.121110 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.121120 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:06 crc kubenswrapper[4739]: E0218 14:01:06.137355 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T14:01:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"90b9be3f-f663-4169-ae17-5b48d37fe9e4\\\",\\\"systemUUID\\\":\\\"d786f2bd-7712-4d82-a689-cbffdaab4e85\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T14:01:06Z is after 
2025-08-24T17:21:41Z" Feb 18 14:01:06 crc kubenswrapper[4739]: E0218 14:01:06.138238 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.139873 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.139923 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.139934 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.139952 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.139966 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.242053 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.242113 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.242130 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.242154 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.242170 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.344708 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.345071 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.345086 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.345105 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.345117 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.410229 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm"
Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.410271 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 14:01:06 crc kubenswrapper[4739]: E0218 14:01:06.410416 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e"
Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.410465 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.410590 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 14:01:06 crc kubenswrapper[4739]: E0218 14:01:06.410742 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 14:01:06 crc kubenswrapper[4739]: E0218 14:01:06.410819 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
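Annotation: the second failure mode is independent of the webhook. The runtime reports NetworkReady=false because no CNI network configuration exists in /etc/kubernetes/cni/net.d/, so the kubelet cannot create new pod sandboxes ("No sandbox for pod can be found") and skips syncing the affected pods. The readiness test is essentially "does the conf dir contain a loadable network config"; below is a rough approximation of that check. The directory path is from the log; everything else is illustrative, and the real check also parses and validates the config.

```go
// cnicheck.go: rough approximation of the CNI config-presence check.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether confDir contains any file libcni would try to
// load (.conf, .conflist or .json). Presence only; contents are not validated.
func hasCNIConfig(confDir string) bool {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false // missing or unreadable dir counts as "no config"
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true
		}
	}
	return false
}

func main() {
	fmt.Println("CNI ready:", hasCNIConfig("/etc/kubernetes/cni/net.d"))
}
```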
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:06 crc kubenswrapper[4739]: E0218 14:01:06.410938 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.433188 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 18:32:09.278404774 +0000 UTC Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.447297 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.447337 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.447347 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.447363 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.447373 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.549807 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.549855 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.549863 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.549878 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.549887 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.651875 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.651936 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.651954 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.651977 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.651992 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.755624 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.755709 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.755734 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.755767 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.755791 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.858496 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.858582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.858604 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.858636 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.858659 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.961316 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.961383 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.961413 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.961479 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:06 crc kubenswrapper[4739]: I0218 14:01:06.961503 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:06Z","lastTransitionTime":"2026-02-18T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.065010 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.065083 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.065106 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.065134 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.065155 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:07Z","lastTransitionTime":"2026-02-18T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.168268 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.168337 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.168378 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.168417 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.168473 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:07Z","lastTransitionTime":"2026-02-18T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.271349 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.271419 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.271473 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.271507 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.271529 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:07Z","lastTransitionTime":"2026-02-18T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.374502 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.374547 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.374563 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.374583 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.374597 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:07Z","lastTransitionTime":"2026-02-18T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.433373 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 14:46:37.253462173 +0000 UTC Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.477519 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.477565 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.477582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.477607 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.477625 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:07Z","lastTransitionTime":"2026-02-18T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.580034 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.580108 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.580132 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.580166 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.580188 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:07Z","lastTransitionTime":"2026-02-18T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.683300 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.683377 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.683399 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.683425 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.683483 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:07Z","lastTransitionTime":"2026-02-18T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.786048 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.786154 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.786196 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.786257 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.786279 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:07Z","lastTransitionTime":"2026-02-18T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.889383 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.889495 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.889521 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.889549 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.889569 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:07Z","lastTransitionTime":"2026-02-18T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.992379 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.992420 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.992432 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.992466 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:07 crc kubenswrapper[4739]: I0218 14:01:07.992479 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:07Z","lastTransitionTime":"2026-02-18T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.094224 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.094263 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.094274 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.094289 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.094299 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:08Z","lastTransitionTime":"2026-02-18T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.197373 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.197438 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.197486 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.197510 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.197526 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:08Z","lastTransitionTime":"2026-02-18T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.299887 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.299962 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.299994 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.300026 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.300049 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:08Z","lastTransitionTime":"2026-02-18T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.403644 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.403697 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.403713 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.403735 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.403752 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:08Z","lastTransitionTime":"2026-02-18T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.410774 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:08 crc kubenswrapper[4739]: E0218 14:01:08.410985 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.411058 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.411084 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:08 crc kubenswrapper[4739]: E0218 14:01:08.411133 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.411108 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:08 crc kubenswrapper[4739]: E0218 14:01:08.411251 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:08 crc kubenswrapper[4739]: E0218 14:01:08.411592 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.434120 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 11:35:22.116945532 +0000 UTC Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.473017 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-mdk59" podStartSLOduration=75.47299088 podStartE2EDuration="1m15.47299088s" podCreationTimestamp="2026-02-18 13:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:08.472817245 +0000 UTC m=+100.968538217" watchObservedRunningTime="2026-02-18 14:01:08.47299088 +0000 UTC m=+100.968711842" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.502072 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-h9slg" podStartSLOduration=74.50205181 podStartE2EDuration="1m14.50205181s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:08.489669142 +0000 UTC m=+100.985390104" watchObservedRunningTime="2026-02-18 14:01:08.50205181 +0000 UTC m=+100.997772732" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.507162 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.507296 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.507317 4739 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.507339 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.507397 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:08Z","lastTransitionTime":"2026-02-18T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.558706 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-ltvvj" podStartSLOduration=74.558682905 podStartE2EDuration="1m14.558682905s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:08.557665381 +0000 UTC m=+101.053386333" watchObservedRunningTime="2026-02-18 14:01:08.558682905 +0000 UTC m=+101.054403847" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.597884 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=15.59786748 podStartE2EDuration="15.59786748s" podCreationTimestamp="2026-02-18 14:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:08.582737435 +0000 UTC m=+101.078458367" watchObservedRunningTime="2026-02-18 14:01:08.59786748 +0000 UTC m=+101.093588402" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.598228 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=81.598224979 podStartE2EDuration="1m21.598224979s" podCreationTimestamp="2026-02-18 13:59:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:08.59785177 +0000 UTC m=+101.093572722" watchObservedRunningTime="2026-02-18 14:01:08.598224979 +0000 UTC m=+101.093945901" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.610238 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.610274 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.610287 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.610303 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.610316 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:08Z","lastTransitionTime":"2026-02-18T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.620746 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podStartSLOduration=74.620727421 podStartE2EDuration="1m14.620727421s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:08.620482975 +0000 UTC m=+101.116203917" watchObservedRunningTime="2026-02-18 14:01:08.620727421 +0000 UTC m=+101.116448343" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.666785 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9rjzr" podStartSLOduration=74.666769231 podStartE2EDuration="1m14.666769231s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:08.666767621 +0000 UTC m=+101.162488553" watchObservedRunningTime="2026-02-18 14:01:08.666769231 +0000 UTC m=+101.162490153" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.698094 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=49.698074926 podStartE2EDuration="49.698074926s" podCreationTimestamp="2026-02-18 14:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:08.682432299 +0000 UTC m=+101.178153261" watchObservedRunningTime="2026-02-18 14:01:08.698074926 +0000 UTC m=+101.193795868" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.710817 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-p98v4" podStartSLOduration=75.710796453 podStartE2EDuration="1m15.710796453s" podCreationTimestamp="2026-02-18 13:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:08.710358802 +0000 UTC m=+101.206079754" watchObservedRunningTime="2026-02-18 14:01:08.710796453 +0000 UTC m=+101.206517385" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.712099 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.712264 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.712362 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.712482 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.712608 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:08Z","lastTransitionTime":"2026-02-18T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.732056 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=78.732034885 podStartE2EDuration="1m18.732034885s" podCreationTimestamp="2026-02-18 13:59:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:08.731066831 +0000 UTC m=+101.226787793" watchObservedRunningTime="2026-02-18 14:01:08.732034885 +0000 UTC m=+101.227755827" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.745386 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=81.745362396 podStartE2EDuration="1m21.745362396s" podCreationTimestamp="2026-02-18 13:59:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:08.74471782 +0000 UTC m=+101.240438762" watchObservedRunningTime="2026-02-18 14:01:08.745362396 +0000 UTC m=+101.241083358" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.814787 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.814853 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.814871 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.814895 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.814914 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:08Z","lastTransitionTime":"2026-02-18T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.918358 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.918434 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.918488 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.918519 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:08 crc kubenswrapper[4739]: I0218 14:01:08.918541 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:08Z","lastTransitionTime":"2026-02-18T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.021537 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.021568 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.021578 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.021798 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.021811 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:09Z","lastTransitionTime":"2026-02-18T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.125043 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.125110 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.125136 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.125168 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.125190 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:09Z","lastTransitionTime":"2026-02-18T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.227995 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.228064 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.228082 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.228108 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.228125 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:09Z","lastTransitionTime":"2026-02-18T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.331047 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.331119 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.331138 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.331164 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.331183 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:09Z","lastTransitionTime":"2026-02-18T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.411057 4739 scope.go:117] "RemoveContainer" containerID="cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed" Feb 18 14:01:09 crc kubenswrapper[4739]: E0218 14:01:09.411360 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.434267 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 09:35:35.503884896 +0000 UTC Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.434826 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.434883 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.434906 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.434936 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.434959 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:09Z","lastTransitionTime":"2026-02-18T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.538610 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.538668 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.538691 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.538722 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.538743 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:09Z","lastTransitionTime":"2026-02-18T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.640916 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.640985 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.640998 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.641013 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.641027 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:09Z","lastTransitionTime":"2026-02-18T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.744205 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.744280 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.744304 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.744336 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.744360 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:09Z","lastTransitionTime":"2026-02-18T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.846914 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.846971 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.846987 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.847010 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.847026 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:09Z","lastTransitionTime":"2026-02-18T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.950142 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.950182 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.950192 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.950209 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:09 crc kubenswrapper[4739]: I0218 14:01:09.950220 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:09Z","lastTransitionTime":"2026-02-18T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.052744 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.052814 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.052832 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.052855 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.052872 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:10Z","lastTransitionTime":"2026-02-18T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.155708 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.155767 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.155779 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.155796 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.155807 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:10Z","lastTransitionTime":"2026-02-18T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.258855 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.258936 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.258958 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.258985 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.259007 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:10Z","lastTransitionTime":"2026-02-18T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.362582 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.362655 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.362672 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.362698 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.362714 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:10Z","lastTransitionTime":"2026-02-18T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.409713 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.409713 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.409862 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:10 crc kubenswrapper[4739]: E0218 14:01:10.410042 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.410106 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:10 crc kubenswrapper[4739]: E0218 14:01:10.410224 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:10 crc kubenswrapper[4739]: E0218 14:01:10.410396 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:10 crc kubenswrapper[4739]: E0218 14:01:10.410624 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.435153 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 05:32:09.778551616 +0000 UTC Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.466000 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.466048 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.466059 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.466079 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.466093 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:10Z","lastTransitionTime":"2026-02-18T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.569223 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.569344 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.569368 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.569395 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.569421 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:10Z","lastTransitionTime":"2026-02-18T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.672077 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.672131 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.672149 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.672171 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.672188 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:10Z","lastTransitionTime":"2026-02-18T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.774782 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.774854 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.774879 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.774906 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.774926 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:10Z","lastTransitionTime":"2026-02-18T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.877315 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.877367 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.877380 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.877400 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.877412 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:10Z","lastTransitionTime":"2026-02-18T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.979948 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.980006 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.980022 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.980045 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:10 crc kubenswrapper[4739]: I0218 14:01:10.980059 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:10Z","lastTransitionTime":"2026-02-18T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.082917 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.083007 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.083032 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.083063 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.083085 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:11Z","lastTransitionTime":"2026-02-18T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.186364 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.186437 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.186523 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.186554 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.186575 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:11Z","lastTransitionTime":"2026-02-18T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.289862 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.289920 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.289941 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.289970 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.289992 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:11Z","lastTransitionTime":"2026-02-18T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.393220 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.393768 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.393799 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.393844 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.393870 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:11Z","lastTransitionTime":"2026-02-18T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.435522 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 18:00:08.088607268 +0000 UTC Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.497583 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.497659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.497682 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.497715 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.497739 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:11Z","lastTransitionTime":"2026-02-18T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.600798 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.600843 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.600852 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.600882 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.600893 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:11Z","lastTransitionTime":"2026-02-18T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.703714 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.703769 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.703788 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.703810 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.703827 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:11Z","lastTransitionTime":"2026-02-18T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.808112 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.808160 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.808195 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.808215 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.808229 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:11Z","lastTransitionTime":"2026-02-18T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.911348 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.911411 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.911430 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.911510 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:11 crc kubenswrapper[4739]: I0218 14:01:11.911543 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:11Z","lastTransitionTime":"2026-02-18T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.013593 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.013632 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.013642 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.013656 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.013667 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:12Z","lastTransitionTime":"2026-02-18T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.115421 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.115810 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.115935 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.116045 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.116145 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:12Z","lastTransitionTime":"2026-02-18T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.219052 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.219415 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.219675 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.219900 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.220103 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:12Z","lastTransitionTime":"2026-02-18T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.322860 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.323215 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.323416 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.323685 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.323900 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:12Z","lastTransitionTime":"2026-02-18T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.409419 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.409487 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.409563 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:12 crc kubenswrapper[4739]: E0218 14:01:12.410222 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:12 crc kubenswrapper[4739]: E0218 14:01:12.409885 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:12 crc kubenswrapper[4739]: E0218 14:01:12.410288 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.410360 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:12 crc kubenswrapper[4739]: E0218 14:01:12.410474 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.426908 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.426967 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.427022 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.427053 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.427078 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:12Z","lastTransitionTime":"2026-02-18T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.436639 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 02:25:34.814857393 +0000 UTC Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.529304 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.529372 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.529399 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.529431 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.529501 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:12Z","lastTransitionTime":"2026-02-18T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.632532 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.632595 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.632615 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.632640 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.632658 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:12Z","lastTransitionTime":"2026-02-18T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.720153 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:12 crc kubenswrapper[4739]: E0218 14:01:12.720360 4739 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:01:12 crc kubenswrapper[4739]: E0218 14:01:12.720532 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs podName:151d76ab-14d7-4b0b-a930-785156818a3e nodeName:}" failed. No retries permitted until 2026-02-18 14:02:16.720439083 +0000 UTC m=+169.216160045 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs") pod "network-metrics-daemon-nhkmm" (UID: "151d76ab-14d7-4b0b-a930-785156818a3e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.736293 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.736365 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.736382 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.736409 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.736430 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:12Z","lastTransitionTime":"2026-02-18T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.839536 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.839598 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.839616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.839640 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.839658 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:12Z","lastTransitionTime":"2026-02-18T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.942704 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.942761 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.942778 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.942800 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:12 crc kubenswrapper[4739]: I0218 14:01:12.942819 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:12Z","lastTransitionTime":"2026-02-18T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.046131 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.046206 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.046220 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.046246 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.046261 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:13Z","lastTransitionTime":"2026-02-18T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.149591 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.149647 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.149659 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.149676 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.149690 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:13Z","lastTransitionTime":"2026-02-18T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.251970 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.252012 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.252020 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.252034 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.252044 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:13Z","lastTransitionTime":"2026-02-18T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.354216 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.354281 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.354300 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.354324 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.354342 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:13Z","lastTransitionTime":"2026-02-18T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.437512 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 03:56:10.888610682 +0000 UTC Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.456397 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.456494 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.456505 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.456529 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.456541 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:13Z","lastTransitionTime":"2026-02-18T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.560352 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.560536 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.560562 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.560596 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.560620 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:13Z","lastTransitionTime":"2026-02-18T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.664393 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.664474 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.664487 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.664506 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.664519 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:13Z","lastTransitionTime":"2026-02-18T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.768532 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.768590 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.768608 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.768634 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.768647 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:13Z","lastTransitionTime":"2026-02-18T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.871113 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.871189 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.871212 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.871240 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.871259 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:13Z","lastTransitionTime":"2026-02-18T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.974080 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.974210 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.974228 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.974280 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:13 crc kubenswrapper[4739]: I0218 14:01:13.974301 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:13Z","lastTransitionTime":"2026-02-18T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.077630 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.077681 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.077693 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.077712 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.077724 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:14Z","lastTransitionTime":"2026-02-18T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.180823 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.180861 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.180870 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.180889 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.180899 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:14Z","lastTransitionTime":"2026-02-18T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.284257 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.284312 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.284326 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.284346 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.284359 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:14Z","lastTransitionTime":"2026-02-18T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.386636 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.386747 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.386806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.386831 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.386848 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:14Z","lastTransitionTime":"2026-02-18T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.409601 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.409637 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.409695 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:14 crc kubenswrapper[4739]: E0218 14:01:14.409739 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:14 crc kubenswrapper[4739]: E0218 14:01:14.409838 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.409854 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:14 crc kubenswrapper[4739]: E0218 14:01:14.409966 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:14 crc kubenswrapper[4739]: E0218 14:01:14.410097 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.437867 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 01:26:40.728225011 +0000 UTC Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.489572 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.489654 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.489677 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.489714 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.489737 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:14Z","lastTransitionTime":"2026-02-18T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.592675 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.592744 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.592761 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.592785 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.592802 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:14Z","lastTransitionTime":"2026-02-18T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.695553 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.695603 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.695616 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.695635 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.695649 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:14Z","lastTransitionTime":"2026-02-18T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.798122 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.798190 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.798213 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.798243 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.798267 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:14Z","lastTransitionTime":"2026-02-18T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.900988 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.901049 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.901061 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.901078 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:14 crc kubenswrapper[4739]: I0218 14:01:14.901089 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:14Z","lastTransitionTime":"2026-02-18T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.003869 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.003980 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.003995 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.004016 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.004030 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:15Z","lastTransitionTime":"2026-02-18T14:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.107275 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.107331 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.107342 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.107365 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.107378 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:15Z","lastTransitionTime":"2026-02-18T14:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.209673 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.209744 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.209758 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.209778 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.209793 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:15Z","lastTransitionTime":"2026-02-18T14:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.312748 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.312806 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.312825 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.312852 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.312869 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:15Z","lastTransitionTime":"2026-02-18T14:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.416936 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.416991 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.417007 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.417023 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.417034 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:15Z","lastTransitionTime":"2026-02-18T14:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.438896 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 04:48:20.132858156 +0000 UTC Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.519683 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.519727 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.519769 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.519792 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.519808 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:15Z","lastTransitionTime":"2026-02-18T14:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.622599 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.622650 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.622663 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.622680 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.622692 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:15Z","lastTransitionTime":"2026-02-18T14:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.725926 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.725983 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.725999 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.726021 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.726042 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:15Z","lastTransitionTime":"2026-02-18T14:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.829569 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.829663 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.829688 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.829731 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.829748 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:15Z","lastTransitionTime":"2026-02-18T14:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.933046 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.933100 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.933113 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.933130 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:15 crc kubenswrapper[4739]: I0218 14:01:15.933141 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:15Z","lastTransitionTime":"2026-02-18T14:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.036246 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.036313 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.036329 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.036352 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.036369 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:16Z","lastTransitionTime":"2026-02-18T14:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.139565 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.139634 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.139647 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.139674 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.139688 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:16Z","lastTransitionTime":"2026-02-18T14:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.242628 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.242696 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.242719 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.242750 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.242771 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:16Z","lastTransitionTime":"2026-02-18T14:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.311660 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.311730 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.311744 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.311765 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.311779 4739 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T14:01:16Z","lastTransitionTime":"2026-02-18T14:01:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.375058 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq"] Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.375685 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.378385 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.380008 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.380376 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.380435 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.410826 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:16 crc kubenswrapper[4739]: E0218 14:01:16.411042 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.411333 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.411423 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:16 crc kubenswrapper[4739]: E0218 14:01:16.411650 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.411678 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:16 crc kubenswrapper[4739]: E0218 14:01:16.411786 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:16 crc kubenswrapper[4739]: E0218 14:01:16.412241 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.439857 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 08:50:08.322800461 +0000 UTC Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.439918 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.450271 4739 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.462512 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.462604 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.462719 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: 
\"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.463097 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.463206 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.564591 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.564701 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.564764 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.564811 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.564890 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.564944 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.564952 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.566876 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.575610 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.593629 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4ba118-afe3-4671-93a3-76c84f2bfcdf-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bjwtq\" (UID: \"7c4ba118-afe3-4671-93a3-76c84f2bfcdf\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.700268 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.997119 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" event={"ID":"7c4ba118-afe3-4671-93a3-76c84f2bfcdf","Type":"ContainerStarted","Data":"9957416afab0fb79b3fec857960d4f1681be8e0c4aa09a862d135e93a0e60639"} Feb 18 14:01:16 crc kubenswrapper[4739]: I0218 14:01:16.997228 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" event={"ID":"7c4ba118-afe3-4671-93a3-76c84f2bfcdf","Type":"ContainerStarted","Data":"7bc88fbbe4707321088da2378557fba0b9dc2706dfe57b74cb1194bdef3be1eb"} Feb 18 14:01:17 crc kubenswrapper[4739]: I0218 14:01:17.018237 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bjwtq" podStartSLOduration=83.01820987 podStartE2EDuration="1m23.01820987s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:17.017948354 +0000 UTC m=+109.513669336" watchObservedRunningTime="2026-02-18 14:01:17.01820987 +0000 UTC m=+109.513930822" Feb 18 14:01:18 crc kubenswrapper[4739]: I0218 14:01:18.409976 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:18 crc kubenswrapper[4739]: I0218 14:01:18.410039 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:18 crc kubenswrapper[4739]: I0218 14:01:18.409980 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:18 crc kubenswrapper[4739]: I0218 14:01:18.410099 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:18 crc kubenswrapper[4739]: E0218 14:01:18.411701 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:18 crc kubenswrapper[4739]: E0218 14:01:18.412429 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:18 crc kubenswrapper[4739]: E0218 14:01:18.412654 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:18 crc kubenswrapper[4739]: E0218 14:01:18.412786 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:20 crc kubenswrapper[4739]: I0218 14:01:20.410459 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:20 crc kubenswrapper[4739]: I0218 14:01:20.410680 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:20 crc kubenswrapper[4739]: E0218 14:01:20.410727 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:20 crc kubenswrapper[4739]: I0218 14:01:20.410550 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:20 crc kubenswrapper[4739]: E0218 14:01:20.410899 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:20 crc kubenswrapper[4739]: I0218 14:01:20.410580 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:20 crc kubenswrapper[4739]: E0218 14:01:20.411020 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:20 crc kubenswrapper[4739]: E0218 14:01:20.411081 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:22 crc kubenswrapper[4739]: I0218 14:01:22.410296 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:22 crc kubenswrapper[4739]: I0218 14:01:22.410370 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:22 crc kubenswrapper[4739]: I0218 14:01:22.410311 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:22 crc kubenswrapper[4739]: E0218 14:01:22.410617 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:22 crc kubenswrapper[4739]: I0218 14:01:22.410637 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:22 crc kubenswrapper[4739]: E0218 14:01:22.410745 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:22 crc kubenswrapper[4739]: E0218 14:01:22.411290 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:22 crc kubenswrapper[4739]: E0218 14:01:22.411387 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:22 crc kubenswrapper[4739]: I0218 14:01:22.411908 4739 scope.go:117] "RemoveContainer" containerID="cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed" Feb 18 14:01:22 crc kubenswrapper[4739]: E0218 14:01:22.412184 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-x4j94_openshift-ovn-kubernetes(f04e1fa3-4bb9-41e9-bf1d-a2862fb63224)\"" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" Feb 18 14:01:24 crc kubenswrapper[4739]: I0218 14:01:24.410358 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:24 crc kubenswrapper[4739]: E0218 14:01:24.411069 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:24 crc kubenswrapper[4739]: I0218 14:01:24.410518 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:24 crc kubenswrapper[4739]: E0218 14:01:24.411340 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:24 crc kubenswrapper[4739]: I0218 14:01:24.410421 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:24 crc kubenswrapper[4739]: I0218 14:01:24.410537 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:24 crc kubenswrapper[4739]: E0218 14:01:24.411662 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:24 crc kubenswrapper[4739]: E0218 14:01:24.411845 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:26 crc kubenswrapper[4739]: I0218 14:01:26.409964 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:26 crc kubenswrapper[4739]: I0218 14:01:26.410001 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:26 crc kubenswrapper[4739]: I0218 14:01:26.410096 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:26 crc kubenswrapper[4739]: E0218 14:01:26.410253 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:26 crc kubenswrapper[4739]: I0218 14:01:26.410572 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:26 crc kubenswrapper[4739]: E0218 14:01:26.410685 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:26 crc kubenswrapper[4739]: E0218 14:01:26.410831 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:26 crc kubenswrapper[4739]: E0218 14:01:26.410929 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:28 crc kubenswrapper[4739]: I0218 14:01:28.409665 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:28 crc kubenswrapper[4739]: I0218 14:01:28.409648 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:28 crc kubenswrapper[4739]: I0218 14:01:28.409838 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:28 crc kubenswrapper[4739]: E0218 14:01:28.409845 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:28 crc kubenswrapper[4739]: E0218 14:01:28.409983 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:28 crc kubenswrapper[4739]: I0218 14:01:28.409665 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:28 crc kubenswrapper[4739]: E0218 14:01:28.410145 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:28 crc kubenswrapper[4739]: E0218 14:01:28.410336 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:28 crc kubenswrapper[4739]: E0218 14:01:28.422958 4739 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 18 14:01:28 crc kubenswrapper[4739]: E0218 14:01:28.537687 4739 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:01:30 crc kubenswrapper[4739]: I0218 14:01:30.040987 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9slg_ec8fd6de-f77b-48a7-848f-a1b94e866365/kube-multus/1.log" Feb 18 14:01:30 crc kubenswrapper[4739]: I0218 14:01:30.041824 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9slg_ec8fd6de-f77b-48a7-848f-a1b94e866365/kube-multus/0.log" Feb 18 14:01:30 crc kubenswrapper[4739]: I0218 14:01:30.041967 4739 generic.go:334] "Generic (PLEG): container finished" podID="ec8fd6de-f77b-48a7-848f-a1b94e866365" containerID="c7e57d4b3d2fa1999cedc5cef8c29dd528fa5f44c130854cb8f7dc0751a2ce67" exitCode=1 Feb 18 14:01:30 crc kubenswrapper[4739]: I0218 14:01:30.042060 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9slg" event={"ID":"ec8fd6de-f77b-48a7-848f-a1b94e866365","Type":"ContainerDied","Data":"c7e57d4b3d2fa1999cedc5cef8c29dd528fa5f44c130854cb8f7dc0751a2ce67"} Feb 18 14:01:30 crc kubenswrapper[4739]: I0218 14:01:30.042173 4739 scope.go:117] "RemoveContainer" containerID="f2c8be60a4ce3344cfbed98a4a81e6f22be7610d769e1509664f7c56fce6309c" Feb 18 14:01:30 crc kubenswrapper[4739]: I0218 14:01:30.042797 4739 scope.go:117] "RemoveContainer" containerID="c7e57d4b3d2fa1999cedc5cef8c29dd528fa5f44c130854cb8f7dc0751a2ce67" Feb 18 14:01:30 crc kubenswrapper[4739]: E0218 14:01:30.043109 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-h9slg_openshift-multus(ec8fd6de-f77b-48a7-848f-a1b94e866365)\"" pod="openshift-multus/multus-h9slg" podUID="ec8fd6de-f77b-48a7-848f-a1b94e866365" Feb 18 14:01:30 crc kubenswrapper[4739]: I0218 14:01:30.410095 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:30 crc kubenswrapper[4739]: I0218 14:01:30.410159 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:30 crc kubenswrapper[4739]: E0218 14:01:30.410275 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:30 crc kubenswrapper[4739]: I0218 14:01:30.410310 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:30 crc kubenswrapper[4739]: I0218 14:01:30.410321 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:30 crc kubenswrapper[4739]: E0218 14:01:30.410485 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:30 crc kubenswrapper[4739]: E0218 14:01:30.410603 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:30 crc kubenswrapper[4739]: E0218 14:01:30.410764 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:31 crc kubenswrapper[4739]: I0218 14:01:31.047509 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9slg_ec8fd6de-f77b-48a7-848f-a1b94e866365/kube-multus/1.log" Feb 18 14:01:32 crc kubenswrapper[4739]: I0218 14:01:32.409919 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:32 crc kubenswrapper[4739]: I0218 14:01:32.410059 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:32 crc kubenswrapper[4739]: I0218 14:01:32.410165 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:32 crc kubenswrapper[4739]: E0218 14:01:32.410067 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:32 crc kubenswrapper[4739]: E0218 14:01:32.410292 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:32 crc kubenswrapper[4739]: E0218 14:01:32.410408 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:32 crc kubenswrapper[4739]: I0218 14:01:32.410489 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:32 crc kubenswrapper[4739]: E0218 14:01:32.410555 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:33 crc kubenswrapper[4739]: E0218 14:01:33.538714 4739 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:01:34 crc kubenswrapper[4739]: I0218 14:01:34.410696 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:34 crc kubenswrapper[4739]: I0218 14:01:34.410736 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:34 crc kubenswrapper[4739]: I0218 14:01:34.410925 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:34 crc kubenswrapper[4739]: I0218 14:01:34.410937 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:34 crc kubenswrapper[4739]: E0218 14:01:34.410914 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:34 crc kubenswrapper[4739]: E0218 14:01:34.411032 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:34 crc kubenswrapper[4739]: E0218 14:01:34.411120 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:34 crc kubenswrapper[4739]: E0218 14:01:34.411207 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:34 crc kubenswrapper[4739]: I0218 14:01:34.412318 4739 scope.go:117] "RemoveContainer" containerID="cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed" Feb 18 14:01:35 crc kubenswrapper[4739]: I0218 14:01:35.062802 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/3.log" Feb 18 14:01:35 crc kubenswrapper[4739]: I0218 14:01:35.066927 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerStarted","Data":"54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8"} Feb 18 14:01:35 crc kubenswrapper[4739]: I0218 14:01:35.067632 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 14:01:35 crc kubenswrapper[4739]: I0218 14:01:35.126745 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podStartSLOduration=101.126732052 podStartE2EDuration="1m41.126732052s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:35.125623035 +0000 UTC m=+127.621343977" watchObservedRunningTime="2026-02-18 14:01:35.126732052 +0000 UTC m=+127.622452974" Feb 18 14:01:35 crc kubenswrapper[4739]: I0218 14:01:35.403485 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-nhkmm"] Feb 18 14:01:35 crc kubenswrapper[4739]: I0218 14:01:35.403616 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:35 crc kubenswrapper[4739]: E0218 14:01:35.403754 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:36 crc kubenswrapper[4739]: I0218 14:01:36.410488 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:36 crc kubenswrapper[4739]: E0218 14:01:36.410893 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:36 crc kubenswrapper[4739]: I0218 14:01:36.411204 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:36 crc kubenswrapper[4739]: E0218 14:01:36.411298 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:36 crc kubenswrapper[4739]: I0218 14:01:36.411558 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:36 crc kubenswrapper[4739]: E0218 14:01:36.411648 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:37 crc kubenswrapper[4739]: I0218 14:01:37.409268 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:37 crc kubenswrapper[4739]: E0218 14:01:37.409478 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:38 crc kubenswrapper[4739]: I0218 14:01:38.409873 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:38 crc kubenswrapper[4739]: I0218 14:01:38.409954 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:38 crc kubenswrapper[4739]: I0218 14:01:38.410583 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:38 crc kubenswrapper[4739]: E0218 14:01:38.411856 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:38 crc kubenswrapper[4739]: E0218 14:01:38.411946 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:38 crc kubenswrapper[4739]: E0218 14:01:38.412047 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:38 crc kubenswrapper[4739]: E0218 14:01:38.539610 4739 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:01:39 crc kubenswrapper[4739]: I0218 14:01:39.409745 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:39 crc kubenswrapper[4739]: E0218 14:01:39.409940 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:40 crc kubenswrapper[4739]: I0218 14:01:40.410354 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:40 crc kubenswrapper[4739]: I0218 14:01:40.410362 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:40 crc kubenswrapper[4739]: E0218 14:01:40.410613 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:40 crc kubenswrapper[4739]: I0218 14:01:40.410708 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:40 crc kubenswrapper[4739]: E0218 14:01:40.410736 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:40 crc kubenswrapper[4739]: E0218 14:01:40.410883 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:40 crc kubenswrapper[4739]: I0218 14:01:40.612122 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 14:01:41 crc kubenswrapper[4739]: I0218 14:01:41.410038 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:41 crc kubenswrapper[4739]: E0218 14:01:41.410491 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:41 crc kubenswrapper[4739]: I0218 14:01:41.410682 4739 scope.go:117] "RemoveContainer" containerID="c7e57d4b3d2fa1999cedc5cef8c29dd528fa5f44c130854cb8f7dc0751a2ce67" Feb 18 14:01:42 crc kubenswrapper[4739]: I0218 14:01:42.093638 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9slg_ec8fd6de-f77b-48a7-848f-a1b94e866365/kube-multus/1.log" Feb 18 14:01:42 crc kubenswrapper[4739]: I0218 14:01:42.093694 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9slg" event={"ID":"ec8fd6de-f77b-48a7-848f-a1b94e866365","Type":"ContainerStarted","Data":"d2933eda9affe42ab15a0347bde54987f36d532b9d62d4495588205b777d7ff1"} Feb 18 14:01:42 crc kubenswrapper[4739]: I0218 14:01:42.409614 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:42 crc kubenswrapper[4739]: I0218 14:01:42.409687 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:42 crc kubenswrapper[4739]: E0218 14:01:42.409818 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 14:01:42 crc kubenswrapper[4739]: I0218 14:01:42.409901 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:42 crc kubenswrapper[4739]: E0218 14:01:42.410037 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 14:01:42 crc kubenswrapper[4739]: E0218 14:01:42.410070 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 14:01:43 crc kubenswrapper[4739]: I0218 14:01:43.409561 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:43 crc kubenswrapper[4739]: E0218 14:01:43.409755 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nhkmm" podUID="151d76ab-14d7-4b0b-a930-785156818a3e" Feb 18 14:01:44 crc kubenswrapper[4739]: I0218 14:01:44.409966 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:44 crc kubenswrapper[4739]: I0218 14:01:44.410037 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:44 crc kubenswrapper[4739]: I0218 14:01:44.410492 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:44 crc kubenswrapper[4739]: I0218 14:01:44.412547 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 18 14:01:44 crc kubenswrapper[4739]: I0218 14:01:44.413057 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 18 14:01:44 crc kubenswrapper[4739]: I0218 14:01:44.413084 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 18 14:01:44 crc kubenswrapper[4739]: I0218 14:01:44.417248 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 18 14:01:45 crc kubenswrapper[4739]: I0218 14:01:45.410158 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:01:45 crc kubenswrapper[4739]: I0218 14:01:45.412176 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 18 14:01:45 crc kubenswrapper[4739]: I0218 14:01:45.413942 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.191437 4739 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.238958 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-n78q8"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.240162 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.243721 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-sqm9s"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.245422 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.245807 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.247309 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.250014 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.250506 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.250933 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.251293 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.251756 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lbspb"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.252150 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.252546 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.252670 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.253389 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.260849 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.261656 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.261924 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.263352 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.263383 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.263606 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.264024 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.264684 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.265523 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.265695 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.266076 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.266319 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.266578 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.275123 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.275485 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.275546 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.275677 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.276073 4739 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.279175 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.286904 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.287147 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.287696 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.288068 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.288782 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.293049 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.302733 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.304623 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.306942 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.307103 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-rtb8n"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.315464 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316080 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-client-ca\") pod \"route-controller-manager-6576b87f9c-hkhdz\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316116 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a738e9a-0692-4476-b9ba-930e3bdc34d2-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316140 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-image-import-ca\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " 
pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316162 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drssc\" (UniqueName: \"kubernetes.io/projected/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-kube-api-access-drssc\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316182 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a738e9a-0692-4476-b9ba-930e3bdc34d2-serving-cert\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316221 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-serving-cert\") pod \"route-controller-manager-6576b87f9c-hkhdz\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316244 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d41d7405-9b25-414a-a247-1d945df68f89-config\") pod \"machine-api-operator-5694c8668f-sqm9s\" (UID: \"d41d7405-9b25-414a-a247-1d945df68f89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316265 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/86f15b94-810d-4448-a663-fd8862f0e601-node-pullsecrets\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316293 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-config\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316312 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a738e9a-0692-4476-b9ba-930e3bdc34d2-etcd-client\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316333 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcltt\" (UniqueName: \"kubernetes.io/projected/6a73ee03-bb76-478c-bcd1-2d08f0e6f538-kube-api-access-mcltt\") pod \"openshift-config-operator-7777fb866f-6jxsc\" (UID: \"6a73ee03-bb76-478c-bcd1-2d08f0e6f538\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" Feb 18 14:01:47 crc kubenswrapper[4739]: 
I0218 14:01:47.316353 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-serving-cert\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316374 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smwfw\" (UniqueName: \"kubernetes.io/projected/d41d7405-9b25-414a-a247-1d945df68f89-kube-api-access-smwfw\") pod \"machine-api-operator-5694c8668f-sqm9s\" (UID: \"d41d7405-9b25-414a-a247-1d945df68f89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316395 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a738e9a-0692-4476-b9ba-930e3bdc34d2-encryption-config\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316426 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-client-ca\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316475 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/86f15b94-810d-4448-a663-fd8862f0e601-encryption-config\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316495 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316514 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a73ee03-bb76-478c-bcd1-2d08f0e6f538-serving-cert\") pod \"openshift-config-operator-7777fb866f-6jxsc\" (UID: \"6a73ee03-bb76-478c-bcd1-2d08f0e6f538\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316535 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-audit\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316555 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a738e9a-0692-4476-b9ba-930e3bdc34d2-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316577 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-etcd-serving-ca\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316596 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srsh9\" (UniqueName: \"kubernetes.io/projected/86f15b94-810d-4448-a663-fd8862f0e601-kube-api-access-srsh9\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316618 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-config\") pod \"route-controller-manager-6576b87f9c-hkhdz\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.316655 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-config\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.317946 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bdfq\" (UniqueName: \"kubernetes.io/projected/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-kube-api-access-2bdfq\") pod \"route-controller-manager-6576b87f9c-hkhdz\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.318016 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a738e9a-0692-4476-b9ba-930e3bdc34d2-audit-policies\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.318064 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d41d7405-9b25-414a-a247-1d945df68f89-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-sqm9s\" (UID: \"d41d7405-9b25-414a-a247-1d945df68f89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.318087 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/7a738e9a-0692-4476-b9ba-930e3bdc34d2-audit-dir\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.318110 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d41d7405-9b25-414a-a247-1d945df68f89-images\") pod \"machine-api-operator-5694c8668f-sqm9s\" (UID: \"d41d7405-9b25-414a-a247-1d945df68f89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.318197 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6a73ee03-bb76-478c-bcd1-2d08f0e6f538-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6jxsc\" (UID: \"6a73ee03-bb76-478c-bcd1-2d08f0e6f538\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.318227 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/86f15b94-810d-4448-a663-fd8862f0e601-audit-dir\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.318248 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxl96\" (UniqueName: \"kubernetes.io/projected/7a738e9a-0692-4476-b9ba-930e3bdc34d2-kube-api-access-vxl96\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.318269 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/86f15b94-810d-4448-a663-fd8862f0e601-etcd-client\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.318287 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86f15b94-810d-4448-a663-fd8862f0e601-serving-cert\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.318309 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-trusted-ca-bundle\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.323350 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.323932 4739 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-authentication-operator/authentication-operator-69f744f599-9zgsz"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.324307 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.324590 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.324950 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.325317 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fqdjl"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.325617 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-64j2j"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.325897 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.326269 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.326607 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.326676 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.327061 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-rtb8n" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.327625 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.327689 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.328254 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.328521 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.328756 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.329047 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.329381 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.329557 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.329744 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.329846 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.330019 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.331403 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-r2dqq"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.332034 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.332426 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.332580 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.333294 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.343305 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-sqm9s"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.343343 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-b2m46"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.343800 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.344215 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.347306 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.351773 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.352084 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.352296 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-n78q8"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.352322 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-5cdhr"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.352644 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lbspb"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.352708 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.352780 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.355681 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.356471 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.360989 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dqtnr"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.361570 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.377112 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.377493 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.377680 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.378064 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.378319 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.378499 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.379694 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.380371 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zzrbt"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.381802 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.382623 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.383035 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.383316 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.383400 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zzrbt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.383819 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.383802 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.385871 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.386425 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.386878 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.392437 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.392753 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.393022 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.392840 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.397657 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4lvb5"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.406497 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c4w7p"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.406880 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.407117 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lvb5" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.407356 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.407721 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.408315 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.408516 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.409121 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.409988 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.412639 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.412811 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.413458 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.413942 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.414122 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.415571 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-464cg"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.417192 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.417263 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-67w4c"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.417964 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-67w4c" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418749 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/86f15b94-810d-4448-a663-fd8862f0e601-encryption-config\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418787 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418805 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a73ee03-bb76-478c-bcd1-2d08f0e6f538-serving-cert\") pod \"openshift-config-operator-7777fb866f-6jxsc\" (UID: \"6a73ee03-bb76-478c-bcd1-2d08f0e6f538\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418824 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3440ceb6-cf9c-4732-bafb-8a58d419276a-serving-cert\") pod \"service-ca-operator-777779d784-zwjnk\" (UID: \"3440ceb6-cf9c-4732-bafb-8a58d419276a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418846 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-audit\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418862 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a738e9a-0692-4476-b9ba-930e3bdc34d2-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418876 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-etcd-serving-ca\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418891 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srsh9\" (UniqueName: \"kubernetes.io/projected/86f15b94-810d-4448-a663-fd8862f0e601-kube-api-access-srsh9\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418909 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-config\") pod \"route-controller-manager-6576b87f9c-hkhdz\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418925 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84627667-4128-47e5-a611-c650633e8362-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6ds48\" (UID: \"84627667-4128-47e5-a611-c650633e8362\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418940 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-config\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418955 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bdfq\" (UniqueName: \"kubernetes.io/projected/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-kube-api-access-2bdfq\") pod \"route-controller-manager-6576b87f9c-hkhdz\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418970 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a738e9a-0692-4476-b9ba-930e3bdc34d2-audit-policies\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.418984 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/537a1340-9cce-4d5b-9cff-35d934fc4d71-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wbqrx\" (UID: \"537a1340-9cce-4d5b-9cff-35d934fc4d71\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419001 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d41d7405-9b25-414a-a247-1d945df68f89-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-sqm9s\" (UID: \"d41d7405-9b25-414a-a247-1d945df68f89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419016 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhsc6\" (UniqueName: \"kubernetes.io/projected/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-kube-api-access-xhsc6\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419037 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4lmp\" (UniqueName: 
\"kubernetes.io/projected/3440ceb6-cf9c-4732-bafb-8a58d419276a-kube-api-access-v4lmp\") pod \"service-ca-operator-777779d784-zwjnk\" (UID: \"3440ceb6-cf9c-4732-bafb-8a58d419276a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419056 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a738e9a-0692-4476-b9ba-930e3bdc34d2-audit-dir\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419069 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2c4j\" (UniqueName: \"kubernetes.io/projected/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-kube-api-access-w2c4j\") pod \"marketplace-operator-79b997595-c4w7p\" (UID: \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\") " pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419084 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6a73ee03-bb76-478c-bcd1-2d08f0e6f538-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6jxsc\" (UID: \"6a73ee03-bb76-478c-bcd1-2d08f0e6f538\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419099 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d41d7405-9b25-414a-a247-1d945df68f89-images\") pod \"machine-api-operator-5694c8668f-sqm9s\" (UID: \"d41d7405-9b25-414a-a247-1d945df68f89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419114 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6xj8\" (UniqueName: \"kubernetes.io/projected/537a1340-9cce-4d5b-9cff-35d934fc4d71-kube-api-access-m6xj8\") pod \"openshift-apiserver-operator-796bbdcf4f-wbqrx\" (UID: \"537a1340-9cce-4d5b-9cff-35d934fc4d71\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419133 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84627667-4128-47e5-a611-c650633e8362-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6ds48\" (UID: \"84627667-4128-47e5-a611-c650633e8362\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419149 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/86f15b94-810d-4448-a663-fd8862f0e601-audit-dir\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419164 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxl96\" (UniqueName: 
\"kubernetes.io/projected/7a738e9a-0692-4476-b9ba-930e3bdc34d2-kube-api-access-vxl96\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419179 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/86f15b94-810d-4448-a663-fd8862f0e601-etcd-client\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419194 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86f15b94-810d-4448-a663-fd8862f0e601-serving-cert\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419211 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-trusted-ca-bundle\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419226 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zhzb\" (UniqueName: \"kubernetes.io/projected/9d038913-f9eb-40ed-89a8-4687734573aa-kube-api-access-2zhzb\") pod \"machine-approver-56656f9798-tz66n\" (UID: \"9d038913-f9eb-40ed-89a8-4687734573aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419241 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d038913-f9eb-40ed-89a8-4687734573aa-config\") pod \"machine-approver-56656f9798-tz66n\" (UID: \"9d038913-f9eb-40ed-89a8-4687734573aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419256 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/84562f70-3466-4537-9761-33e3abcaacb9-proxy-tls\") pod \"machine-config-controller-84d6567774-25vxv\" (UID: \"84562f70-3466-4537-9761-33e3abcaacb9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419270 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-c4w7p\" (UID: \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\") " pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419288 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07036c39-40f5-4969-afd0-1003c1eae037-config\") pod \"console-operator-58897d9998-fqdjl\" (UID: \"07036c39-40f5-4969-afd0-1003c1eae037\") " 
pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419302 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-default-certificate\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419317 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d038913-f9eb-40ed-89a8-4687734573aa-auth-proxy-config\") pod \"machine-approver-56656f9798-tz66n\" (UID: \"9d038913-f9eb-40ed-89a8-4687734573aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419332 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqqwt\" (UniqueName: \"kubernetes.io/projected/9c1d88a8-7aa9-413f-81cc-5a4852b2691b-kube-api-access-nqqwt\") pod \"olm-operator-6b444d44fb-f4xd7\" (UID: \"9c1d88a8-7aa9-413f-81cc-5a4852b2691b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419349 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-client-ca\") pod \"route-controller-manager-6576b87f9c-hkhdz\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419363 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a738e9a-0692-4476-b9ba-930e3bdc34d2-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419378 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07036c39-40f5-4969-afd0-1003c1eae037-trusted-ca\") pod \"console-operator-58897d9998-fqdjl\" (UID: \"07036c39-40f5-4969-afd0-1003c1eae037\") " pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419396 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4948709-692e-4ce2-b84a-55a87412856d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m59cc\" (UID: \"b4948709-692e-4ce2-b84a-55a87412856d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419412 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-image-import-ca\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc 
kubenswrapper[4739]: I0218 14:01:47.419427 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bmps\" (UniqueName: \"kubernetes.io/projected/84627667-4128-47e5-a611-c650633e8362-kube-api-access-9bmps\") pod \"kube-storage-version-migrator-operator-b67b599dd-6ds48\" (UID: \"84627667-4128-47e5-a611-c650633e8362\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419457 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drssc\" (UniqueName: \"kubernetes.io/projected/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-kube-api-access-drssc\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419474 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a738e9a-0692-4476-b9ba-930e3bdc34d2-serving-cert\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419499 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ds6x\" (UniqueName: \"kubernetes.io/projected/84562f70-3466-4537-9761-33e3abcaacb9-kube-api-access-5ds6x\") pod \"machine-config-controller-84d6567774-25vxv\" (UID: \"84562f70-3466-4537-9761-33e3abcaacb9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419519 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-c4w7p\" (UID: \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\") " pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419553 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420051 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.419555 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-serving-cert\") pod \"route-controller-manager-6576b87f9c-hkhdz\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420224 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3440ceb6-cf9c-4732-bafb-8a58d419276a-config\") pod \"service-ca-operator-777779d784-zwjnk\" (UID: \"3440ceb6-cf9c-4732-bafb-8a58d419276a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420246 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8cbc\" (UniqueName: \"kubernetes.io/projected/b4948709-692e-4ce2-b84a-55a87412856d-kube-api-access-r8cbc\") pod \"openshift-controller-manager-operator-756b6f6bc6-m59cc\" (UID: \"b4948709-692e-4ce2-b84a-55a87412856d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420273 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d41d7405-9b25-414a-a247-1d945df68f89-config\") pod \"machine-api-operator-5694c8668f-sqm9s\" (UID: \"d41d7405-9b25-414a-a247-1d945df68f89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420294 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/84562f70-3466-4537-9761-33e3abcaacb9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-25vxv\" (UID: \"84562f70-3466-4537-9761-33e3abcaacb9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420310 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-stats-auth\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420331 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/86f15b94-810d-4448-a663-fd8862f0e601-node-pullsecrets\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420347 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxgsm\" (UniqueName: \"kubernetes.io/projected/07036c39-40f5-4969-afd0-1003c1eae037-kube-api-access-sxgsm\") pod \"console-operator-58897d9998-fqdjl\" (UID: \"07036c39-40f5-4969-afd0-1003c1eae037\") " 
pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420363 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/537a1340-9cce-4d5b-9cff-35d934fc4d71-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wbqrx\" (UID: \"537a1340-9cce-4d5b-9cff-35d934fc4d71\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420393 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-config\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420409 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a738e9a-0692-4476-b9ba-930e3bdc34d2-etcd-client\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420426 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcltt\" (UniqueName: \"kubernetes.io/projected/6a73ee03-bb76-478c-bcd1-2d08f0e6f538-kube-api-access-mcltt\") pod \"openshift-config-operator-7777fb866f-6jxsc\" (UID: \"6a73ee03-bb76-478c-bcd1-2d08f0e6f538\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420466 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07036c39-40f5-4969-afd0-1003c1eae037-serving-cert\") pod \"console-operator-58897d9998-fqdjl\" (UID: \"07036c39-40f5-4969-afd0-1003c1eae037\") " pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420482 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4948709-692e-4ce2-b84a-55a87412856d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m59cc\" (UID: \"b4948709-692e-4ce2-b84a-55a87412856d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420503 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-serving-cert\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420518 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-service-ca-bundle\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 
14:01:47.420536 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9d038913-f9eb-40ed-89a8-4687734573aa-machine-approver-tls\") pod \"machine-approver-56656f9798-tz66n\" (UID: \"9d038913-f9eb-40ed-89a8-4687734573aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420555 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smwfw\" (UniqueName: \"kubernetes.io/projected/d41d7405-9b25-414a-a247-1d945df68f89-kube-api-access-smwfw\") pod \"machine-api-operator-5694c8668f-sqm9s\" (UID: \"d41d7405-9b25-414a-a247-1d945df68f89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420571 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a738e9a-0692-4476-b9ba-930e3bdc34d2-encryption-config\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420585 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9c1d88a8-7aa9-413f-81cc-5a4852b2691b-srv-cert\") pod \"olm-operator-6b444d44fb-f4xd7\" (UID: \"9c1d88a8-7aa9-413f-81cc-5a4852b2691b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420601 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9c1d88a8-7aa9-413f-81cc-5a4852b2691b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-f4xd7\" (UID: \"9c1d88a8-7aa9-413f-81cc-5a4852b2691b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420616 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-client-ca\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420663 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzb79\" (UniqueName: \"kubernetes.io/projected/52fa7608-a369-4813-8a4d-3e2f8b84c885-kube-api-access-mzb79\") pod \"migrator-59844c95c7-4lvb5\" (UID: \"52fa7608-a369-4813-8a4d-3e2f8b84c885\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lvb5" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.420679 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-metrics-certs\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.423778 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" 
(UniqueName: \"kubernetes.io/host-path/86f15b94-810d-4448-a663-fd8862f0e601-node-pullsecrets\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.425084 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-config\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.428669 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.431162 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.434472 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.434485 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-audit\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.434738 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.434777 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.434931 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-etcd-serving-ca\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.434984 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/86f15b94-810d-4448-a663-fd8862f0e601-audit-dir\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.435041 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7a738e9a-0692-4476-b9ba-930e3bdc34d2-audit-dir\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.436148 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-config\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.436291 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.436611 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.436726 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.437238 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-config\") pod \"route-controller-manager-6576b87f9c-hkhdz\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.432112 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.432169 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.441529 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mxwhp"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.442154 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.442339 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-trusted-ca-bundle\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.442975 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.443192 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mxwhp" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.443263 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6a73ee03-bb76-478c-bcd1-2d08f0e6f538-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6jxsc\" (UID: \"6a73ee03-bb76-478c-bcd1-2d08f0e6f538\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.442972 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/86f15b94-810d-4448-a663-fd8862f0e601-image-import-ca\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.443508 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-client-ca\") pod \"route-controller-manager-6576b87f9c-hkhdz\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.444969 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-client-ca\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.445492 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d41d7405-9b25-414a-a247-1d945df68f89-config\") pod \"machine-api-operator-5694c8668f-sqm9s\" (UID: \"d41d7405-9b25-414a-a247-1d945df68f89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.445647 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d41d7405-9b25-414a-a247-1d945df68f89-images\") pod \"machine-api-operator-5694c8668f-sqm9s\" (UID: \"d41d7405-9b25-414a-a247-1d945df68f89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.446685 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a73ee03-bb76-478c-bcd1-2d08f0e6f538-serving-cert\") pod \"openshift-config-operator-7777fb866f-6jxsc\" (UID: \"6a73ee03-bb76-478c-bcd1-2d08f0e6f538\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.447004 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-serving-cert\") pod \"route-controller-manager-6576b87f9c-hkhdz\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.448990 4739 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.449159 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.449266 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.450540 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.450684 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.450763 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.450793 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.450887 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.450997 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.451105 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.451340 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.452938 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.453094 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.454215 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.454289 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.454345 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.454378 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.454491 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.454615 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.454654 4739 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.454721 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.454806 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.455132 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.456718 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/86f15b94-810d-4448-a663-fd8862f0e601-etcd-client\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.457287 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.461155 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d41d7405-9b25-414a-a247-1d945df68f89-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-sqm9s\" (UID: \"d41d7405-9b25-414a-a247-1d945df68f89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.479971 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86f15b94-810d-4448-a663-fd8862f0e601-serving-cert\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.480394 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-serving-cert\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.483874 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.485235 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/86f15b94-810d-4448-a663-fd8862f0e601-encryption-config\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.485400 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.485595 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.485753 4739 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.485977 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.486542 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a738e9a-0692-4476-b9ba-930e3bdc34d2-etcd-client\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.487154 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-fbnbw"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.492783 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a738e9a-0692-4476-b9ba-930e3bdc34d2-serving-cert\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.495778 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.495879 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.496018 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.496317 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.496556 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.496876 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.498230 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7a738e9a-0692-4476-b9ba-930e3bdc34d2-encryption-config\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.498751 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.500042 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fbnbw" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.502608 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.504113 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.504266 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.505172 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7a738e9a-0692-4476-b9ba-930e3bdc34d2-audit-policies\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.505536 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a738e9a-0692-4476-b9ba-930e3bdc34d2-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.506381 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.508125 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9zgsz"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.508236 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.509167 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.513470 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.513665 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.513972 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.515263 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7a738e9a-0692-4476-b9ba-930e3bdc34d2-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.516055 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-fjgwd"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.516860 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fjgwd" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.517468 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.518390 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-rtb8n"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.519487 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.519612 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.521596 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-64j2j"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522357 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07036c39-40f5-4969-afd0-1003c1eae037-serving-cert\") pod \"console-operator-58897d9998-fqdjl\" (UID: \"07036c39-40f5-4969-afd0-1003c1eae037\") " pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522378 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4948709-692e-4ce2-b84a-55a87412856d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m59cc\" (UID: \"b4948709-692e-4ce2-b84a-55a87412856d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522396 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-service-ca-bundle\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522410 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9c1d88a8-7aa9-413f-81cc-5a4852b2691b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-f4xd7\" (UID: \"9c1d88a8-7aa9-413f-81cc-5a4852b2691b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522426 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9d038913-f9eb-40ed-89a8-4687734573aa-machine-approver-tls\") pod \"machine-approver-56656f9798-tz66n\" (UID: \"9d038913-f9eb-40ed-89a8-4687734573aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522480 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9c1d88a8-7aa9-413f-81cc-5a4852b2691b-srv-cert\") pod \"olm-operator-6b444d44fb-f4xd7\" (UID: \"9c1d88a8-7aa9-413f-81cc-5a4852b2691b\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522503 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzb79\" (UniqueName: \"kubernetes.io/projected/52fa7608-a369-4813-8a4d-3e2f8b84c885-kube-api-access-mzb79\") pod \"migrator-59844c95c7-4lvb5\" (UID: \"52fa7608-a369-4813-8a4d-3e2f8b84c885\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lvb5" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522518 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-metrics-certs\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522540 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3440ceb6-cf9c-4732-bafb-8a58d419276a-serving-cert\") pod \"service-ca-operator-777779d784-zwjnk\" (UID: \"3440ceb6-cf9c-4732-bafb-8a58d419276a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522565 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84627667-4128-47e5-a611-c650633e8362-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6ds48\" (UID: \"84627667-4128-47e5-a611-c650633e8362\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522589 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/537a1340-9cce-4d5b-9cff-35d934fc4d71-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wbqrx\" (UID: \"537a1340-9cce-4d5b-9cff-35d934fc4d71\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522604 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4lmp\" (UniqueName: \"kubernetes.io/projected/3440ceb6-cf9c-4732-bafb-8a58d419276a-kube-api-access-v4lmp\") pod \"service-ca-operator-777779d784-zwjnk\" (UID: \"3440ceb6-cf9c-4732-bafb-8a58d419276a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522620 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhsc6\" (UniqueName: \"kubernetes.io/projected/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-kube-api-access-xhsc6\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522636 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2c4j\" (UniqueName: \"kubernetes.io/projected/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-kube-api-access-w2c4j\") pod \"marketplace-operator-79b997595-c4w7p\" (UID: \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\") " pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" Feb 18 14:01:47 crc kubenswrapper[4739]: 
I0218 14:01:47.522653 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6xj8\" (UniqueName: \"kubernetes.io/projected/537a1340-9cce-4d5b-9cff-35d934fc4d71-kube-api-access-m6xj8\") pod \"openshift-apiserver-operator-796bbdcf4f-wbqrx\" (UID: \"537a1340-9cce-4d5b-9cff-35d934fc4d71\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522671 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84627667-4128-47e5-a611-c650633e8362-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6ds48\" (UID: \"84627667-4128-47e5-a611-c650633e8362\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522688 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zhzb\" (UniqueName: \"kubernetes.io/projected/9d038913-f9eb-40ed-89a8-4687734573aa-kube-api-access-2zhzb\") pod \"machine-approver-56656f9798-tz66n\" (UID: \"9d038913-f9eb-40ed-89a8-4687734573aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522704 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d038913-f9eb-40ed-89a8-4687734573aa-auth-proxy-config\") pod \"machine-approver-56656f9798-tz66n\" (UID: \"9d038913-f9eb-40ed-89a8-4687734573aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522720 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d038913-f9eb-40ed-89a8-4687734573aa-config\") pod \"machine-approver-56656f9798-tz66n\" (UID: \"9d038913-f9eb-40ed-89a8-4687734573aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522734 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/84562f70-3466-4537-9761-33e3abcaacb9-proxy-tls\") pod \"machine-config-controller-84d6567774-25vxv\" (UID: \"84562f70-3466-4537-9761-33e3abcaacb9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522748 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-c4w7p\" (UID: \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\") " pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522763 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07036c39-40f5-4969-afd0-1003c1eae037-config\") pod \"console-operator-58897d9998-fqdjl\" (UID: \"07036c39-40f5-4969-afd0-1003c1eae037\") " pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522777 4739 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-default-certificate\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522793 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4948709-692e-4ce2-b84a-55a87412856d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m59cc\" (UID: \"b4948709-692e-4ce2-b84a-55a87412856d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522808 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqqwt\" (UniqueName: \"kubernetes.io/projected/9c1d88a8-7aa9-413f-81cc-5a4852b2691b-kube-api-access-nqqwt\") pod \"olm-operator-6b444d44fb-f4xd7\" (UID: \"9c1d88a8-7aa9-413f-81cc-5a4852b2691b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522824 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07036c39-40f5-4969-afd0-1003c1eae037-trusted-ca\") pod \"console-operator-58897d9998-fqdjl\" (UID: \"07036c39-40f5-4969-afd0-1003c1eae037\") " pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522839 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bmps\" (UniqueName: \"kubernetes.io/projected/84627667-4128-47e5-a611-c650633e8362-kube-api-access-9bmps\") pod \"kube-storage-version-migrator-operator-b67b599dd-6ds48\" (UID: \"84627667-4128-47e5-a611-c650633e8362\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522860 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ds6x\" (UniqueName: \"kubernetes.io/projected/84562f70-3466-4537-9761-33e3abcaacb9-kube-api-access-5ds6x\") pod \"machine-config-controller-84d6567774-25vxv\" (UID: \"84562f70-3466-4537-9761-33e3abcaacb9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522874 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-c4w7p\" (UID: \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\") " pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522907 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3440ceb6-cf9c-4732-bafb-8a58d419276a-config\") pod \"service-ca-operator-777779d784-zwjnk\" (UID: \"3440ceb6-cf9c-4732-bafb-8a58d419276a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522921 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8cbc\" 
(UniqueName: \"kubernetes.io/projected/b4948709-692e-4ce2-b84a-55a87412856d-kube-api-access-r8cbc\") pod \"openshift-controller-manager-operator-756b6f6bc6-m59cc\" (UID: \"b4948709-692e-4ce2-b84a-55a87412856d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522937 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/84562f70-3466-4537-9761-33e3abcaacb9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-25vxv\" (UID: \"84562f70-3466-4537-9761-33e3abcaacb9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522957 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxgsm\" (UniqueName: \"kubernetes.io/projected/07036c39-40f5-4969-afd0-1003c1eae037-kube-api-access-sxgsm\") pod \"console-operator-58897d9998-fqdjl\" (UID: \"07036c39-40f5-4969-afd0-1003c1eae037\") " pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522973 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-stats-auth\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.522988 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/537a1340-9cce-4d5b-9cff-35d934fc4d71-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wbqrx\" (UID: \"537a1340-9cce-4d5b-9cff-35d934fc4d71\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.523072 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4948709-692e-4ce2-b84a-55a87412856d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-m59cc\" (UID: \"b4948709-692e-4ce2-b84a-55a87412856d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.523121 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.523673 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/537a1340-9cce-4d5b-9cff-35d934fc4d71-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wbqrx\" (UID: \"537a1340-9cce-4d5b-9cff-35d934fc4d71\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.523923 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d038913-f9eb-40ed-89a8-4687734573aa-auth-proxy-config\") pod \"machine-approver-56656f9798-tz66n\" (UID: \"9d038913-f9eb-40ed-89a8-4687734573aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.524010 
4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d038913-f9eb-40ed-89a8-4687734573aa-config\") pod \"machine-approver-56656f9798-tz66n\" (UID: \"9d038913-f9eb-40ed-89a8-4687734573aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.524927 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07036c39-40f5-4969-afd0-1003c1eae037-config\") pod \"console-operator-58897d9998-fqdjl\" (UID: \"07036c39-40f5-4969-afd0-1003c1eae037\") " pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.525290 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.525717 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07036c39-40f5-4969-afd0-1003c1eae037-serving-cert\") pod \"console-operator-58897d9998-fqdjl\" (UID: \"07036c39-40f5-4969-afd0-1003c1eae037\") " pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.526250 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-r2dqq"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.526829 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07036c39-40f5-4969-afd0-1003c1eae037-trusted-ca\") pod \"console-operator-58897d9998-fqdjl\" (UID: \"07036c39-40f5-4969-afd0-1003c1eae037\") " pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.526974 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/84562f70-3466-4537-9761-33e3abcaacb9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-25vxv\" (UID: \"84562f70-3466-4537-9761-33e3abcaacb9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.527326 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9d038913-f9eb-40ed-89a8-4687734573aa-machine-approver-tls\") pod \"machine-approver-56656f9798-tz66n\" (UID: \"9d038913-f9eb-40ed-89a8-4687734573aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.527366 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.530016 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.530568 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4948709-692e-4ce2-b84a-55a87412856d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-m59cc\" (UID: \"b4948709-692e-4ce2-b84a-55a87412856d\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.531313 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.532361 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/537a1340-9cce-4d5b-9cff-35d934fc4d71-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wbqrx\" (UID: \"537a1340-9cce-4d5b-9cff-35d934fc4d71\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.532589 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.534169 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.535728 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.536883 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-b2m46"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.537996 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.539432 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4lvb5"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.541271 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.541885 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.543379 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c4w7p"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.545671 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.548827 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.549864 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-q8t8f"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.551066 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.551189 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-8lgk6"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.552126 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-8lgk6" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.552833 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.553921 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-464cg"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.554993 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.556068 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.557289 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-67w4c"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.558331 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fbnbw"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.559424 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dqtnr"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.560841 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fqdjl"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.562777 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zzrbt"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.562921 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.564800 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.565885 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8lgk6"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.568966 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-q8t8f"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.571949 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mxwhp"] Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.582261 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.602273 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.622253 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.642558 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.662353 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 18 14:01:47 crc 
kubenswrapper[4739]: I0218 14:01:47.682456 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.703957 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.722958 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.742478 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.763046 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.790494 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.799228 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/84562f70-3466-4537-9761-33e3abcaacb9-proxy-tls\") pod \"machine-config-controller-84d6567774-25vxv\" (UID: \"84562f70-3466-4537-9761-33e3abcaacb9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.803437 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.822955 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.828475 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-default-certificate\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.843302 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.844184 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-service-ca-bundle\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.862889 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.869683 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-metrics-certs\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.882349 4739 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.890503 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-stats-auth\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.902499 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.923168 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.942379 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.962603 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 18 14:01:47 crc kubenswrapper[4739]: I0218 14:01:47.982031 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.003835 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.023654 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.043363 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.062238 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.069703 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84627667-4128-47e5-a611-c650633e8362-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6ds48\" (UID: \"84627667-4128-47e5-a611-c650633e8362\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48" Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.083215 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.103738 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.123846 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.143016 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.151804 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9c1d88a8-7aa9-413f-81cc-5a4852b2691b-srv-cert\") pod \"olm-operator-6b444d44fb-f4xd7\" (UID: \"9c1d88a8-7aa9-413f-81cc-5a4852b2691b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.162887 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.182664 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.203417 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.223109 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.228702 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9c1d88a8-7aa9-413f-81cc-5a4852b2691b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-f4xd7\" (UID: \"9c1d88a8-7aa9-413f-81cc-5a4852b2691b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.243041 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.263766 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.283409 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.287049 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84627667-4128-47e5-a611-c650633e8362-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6ds48\" (UID: \"84627667-4128-47e5-a611-c650633e8362\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48" Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.304553 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.322534 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.342853 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.363616 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.383683 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" 
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.388932 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-c4w7p\" (UID: \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\") " pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.416018 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.421647 4739 request.go:700] Waited for 1.014307486s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.423560 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.427838 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-c4w7p\" (UID: \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\") " pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.443655 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.464417 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.482432 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.503908 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.523826 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 18 14:01:48 crc kubenswrapper[4739]: E0218 14:01:48.525785 4739 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 18 14:01:48 crc kubenswrapper[4739]: E0218 14:01:48.525839 4739 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition
Feb 18 14:01:48 crc kubenswrapper[4739]: E0218 14:01:48.525895 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3440ceb6-cf9c-4732-bafb-8a58d419276a-serving-cert podName:3440ceb6-cf9c-4732-bafb-8a58d419276a nodeName:}" failed. No retries permitted until 2026-02-18 14:01:49.02586516 +0000 UTC m=+141.521586152 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3440ceb6-cf9c-4732-bafb-8a58d419276a-serving-cert") pod "service-ca-operator-777779d784-zwjnk" (UID: "3440ceb6-cf9c-4732-bafb-8a58d419276a") : failed to sync secret cache: timed out waiting for the condition
Feb 18 14:01:48 crc kubenswrapper[4739]: E0218 14:01:48.525921 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3440ceb6-cf9c-4732-bafb-8a58d419276a-config podName:3440ceb6-cf9c-4732-bafb-8a58d419276a nodeName:}" failed. No retries permitted until 2026-02-18 14:01:49.025910251 +0000 UTC m=+141.521631193 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/3440ceb6-cf9c-4732-bafb-8a58d419276a-config") pod "service-ca-operator-777779d784-zwjnk" (UID: "3440ceb6-cf9c-4732-bafb-8a58d419276a") : failed to sync configmap cache: timed out waiting for the condition
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.542372 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.563305 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.582998 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.603096 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.622277 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.643292 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.682723 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.703432 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.723594 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.742840 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.773415 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.782703 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.802399 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.822721 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.842482 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.863008 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.882415 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.903764 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.923461 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.943536 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 18 14:01:48 crc kubenswrapper[4739]: I0218 14:01:48.989699 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcltt\" (UniqueName: \"kubernetes.io/projected/6a73ee03-bb76-478c-bcd1-2d08f0e6f538-kube-api-access-mcltt\") pod \"openshift-config-operator-7777fb866f-6jxsc\" (UID: \"6a73ee03-bb76-478c-bcd1-2d08f0e6f538\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.013573 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bdfq\" (UniqueName: \"kubernetes.io/projected/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-kube-api-access-2bdfq\") pod \"route-controller-manager-6576b87f9c-hkhdz\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.026604 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srsh9\" (UniqueName: \"kubernetes.io/projected/86f15b94-810d-4448-a663-fd8862f0e601-kube-api-access-srsh9\") pod \"apiserver-76f77b778f-n78q8\" (UID: \"86f15b94-810d-4448-a663-fd8862f0e601\") " pod="openshift-apiserver/apiserver-76f77b778f-n78q8"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.040770 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3440ceb6-cf9c-4732-bafb-8a58d419276a-config\") pod \"service-ca-operator-777779d784-zwjnk\" (UID: \"3440ceb6-cf9c-4732-bafb-8a58d419276a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.041117 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3440ceb6-cf9c-4732-bafb-8a58d419276a-serving-cert\") pod \"service-ca-operator-777779d784-zwjnk\" (UID: \"3440ceb6-cf9c-4732-bafb-8a58d419276a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.041915 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3440ceb6-cf9c-4732-bafb-8a58d419276a-config\") pod \"service-ca-operator-777779d784-zwjnk\" (UID: \"3440ceb6-cf9c-4732-bafb-8a58d419276a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.046746 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3440ceb6-cf9c-4732-bafb-8a58d419276a-serving-cert\") pod \"service-ca-operator-777779d784-zwjnk\" (UID: \"3440ceb6-cf9c-4732-bafb-8a58d419276a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.049424 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxl96\" (UniqueName: \"kubernetes.io/projected/7a738e9a-0692-4476-b9ba-930e3bdc34d2-kube-api-access-vxl96\") pod \"apiserver-7bbb656c7d-44mk7\" (UID: \"7a738e9a-0692-4476-b9ba-930e3bdc34d2\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.064182 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.071980 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smwfw\" (UniqueName: \"kubernetes.io/projected/d41d7405-9b25-414a-a247-1d945df68f89-kube-api-access-smwfw\") pod \"machine-api-operator-5694c8668f-sqm9s\" (UID: \"d41d7405-9b25-414a-a247-1d945df68f89\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.081545 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.083761 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.084898 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-n78q8"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.103936 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.118114 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.123805 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.141649 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.142984 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.177469 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drssc\" (UniqueName: \"kubernetes.io/projected/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-kube-api-access-drssc\") pod \"controller-manager-879f6c89f-lbspb\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.181229 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.202851 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.225066 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.245357 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.265238 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.283231 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.302622 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.324626 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.368974 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzb79\" (UniqueName: \"kubernetes.io/projected/52fa7608-a369-4813-8a4d-3e2f8b84c885-kube-api-access-mzb79\") pod \"migrator-59844c95c7-4lvb5\" (UID: \"52fa7608-a369-4813-8a4d-3e2f8b84c885\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lvb5"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.385815 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2c4j\" (UniqueName: \"kubernetes.io/projected/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-kube-api-access-w2c4j\") pod \"marketplace-operator-79b997595-c4w7p\" (UID: \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\") " pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.400546 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6xj8\" (UniqueName: \"kubernetes.io/projected/537a1340-9cce-4d5b-9cff-35d934fc4d71-kube-api-access-m6xj8\") pod \"openshift-apiserver-operator-796bbdcf4f-wbqrx\" (UID: \"537a1340-9cce-4d5b-9cff-35d934fc4d71\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.419250 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4lmp\" (UniqueName: \"kubernetes.io/projected/3440ceb6-cf9c-4732-bafb-8a58d419276a-kube-api-access-v4lmp\") pod \"service-ca-operator-777779d784-zwjnk\" (UID: \"3440ceb6-cf9c-4732-bafb-8a58d419276a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.439274 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zhzb\" (UniqueName: \"kubernetes.io/projected/9d038913-f9eb-40ed-89a8-4687734573aa-kube-api-access-2zhzb\") pod \"machine-approver-56656f9798-tz66n\" (UID: \"9d038913-f9eb-40ed-89a8-4687734573aa\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.442563 4739 request.go:700] Waited for 1.916968013s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/serviceaccounts/router/token
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.453127 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.453589 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.457903 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhsc6\" (UniqueName: \"kubernetes.io/projected/b6cef9b9-56ee-4d0a-8c13-651e3f649a0e-kube-api-access-xhsc6\") pod \"router-default-5444994796-5cdhr\" (UID: \"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e\") " pod="openshift-ingress/router-default-5444994796-5cdhr"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.460670 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lvb5"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.465335 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz"]
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.476771 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.479607 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqqwt\" (UniqueName: \"kubernetes.io/projected/9c1d88a8-7aa9-413f-81cc-5a4852b2691b-kube-api-access-nqqwt\") pod \"olm-operator-6b444d44fb-f4xd7\" (UID: \"9c1d88a8-7aa9-413f-81cc-5a4852b2691b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.495772 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8cbc\" (UniqueName: \"kubernetes.io/projected/b4948709-692e-4ce2-b84a-55a87412856d-kube-api-access-r8cbc\") pod \"openshift-controller-manager-operator-756b6f6bc6-m59cc\" (UID: \"b4948709-692e-4ce2-b84a-55a87412856d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.521615 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bmps\" (UniqueName: \"kubernetes.io/projected/84627667-4128-47e5-a611-c650633e8362-kube-api-access-9bmps\") pod \"kube-storage-version-migrator-operator-b67b599dd-6ds48\" (UID: \"84627667-4128-47e5-a611-c650633e8362\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.541481 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ds6x\" (UniqueName: \"kubernetes.io/projected/84562f70-3466-4537-9761-33e3abcaacb9-kube-api-access-5ds6x\") pod \"machine-config-controller-84d6567774-25vxv\" (UID: \"84562f70-3466-4537-9761-33e3abcaacb9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.550281 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc"]
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.562201 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxgsm\" (UniqueName: \"kubernetes.io/projected/07036c39-40f5-4969-afd0-1003c1eae037-kube-api-access-sxgsm\") pod \"console-operator-58897d9998-fqdjl\" (UID: \"07036c39-40f5-4969-afd0-1003c1eae037\") " pod="openshift-console-operator/console-operator-58897d9998-fqdjl"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.563466 4739 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 18 14:01:49 crc kubenswrapper[4739]: W0218 14:01:49.576760 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a73ee03_bb76_478c_bcd1_2d08f0e6f538.slice/crio-c9bb7b5da63b37ef6c871e86f33af4d9df9ded3b05196e2a8e89b2f887a04f2a WatchSource:0}: Error finding container c9bb7b5da63b37ef6c871e86f33af4d9df9ded3b05196e2a8e89b2f887a04f2a: Status 404 returned error can't find the container with id c9bb7b5da63b37ef6c871e86f33af4d9df9ded3b05196e2a8e89b2f887a04f2a
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.583429 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.603795 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.622725 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.622869 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-sqm9s"]
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.623389 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7"]
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.638798 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.643207 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.654167 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.659090 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-n78q8"]
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.668328 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.673207 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c4w7p"]
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.685305 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4lvb5"]
Feb 18 14:01:49 crc kubenswrapper[4739]: W0218 14:01:49.698201 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc43a59b1_306c_4a0e_9f9f_fad2e9082d55.slice/crio-6ae935e4756c3ac9dd9d42b9a107606b44a96ac470faeaa29302b35c3bb1c8df WatchSource:0}: Error finding container 6ae935e4756c3ac9dd9d42b9a107606b44a96ac470faeaa29302b35c3bb1c8df: Status 404 returned error can't find the container with id 6ae935e4756c3ac9dd9d42b9a107606b44a96ac470faeaa29302b35c3bb1c8df
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.698996 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5cdhr"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.705073 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv"
Feb 18 14:01:49 crc kubenswrapper[4739]: W0218 14:01:49.712158 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52fa7608_a369_4813_8a4d_3e2f8b84c885.slice/crio-983d47fc6c49dd2c8fec728306c499f2e20948ad1e714f521cd59f425752df72 WatchSource:0}: Error finding container 983d47fc6c49dd2c8fec728306c499f2e20948ad1e714f521cd59f425752df72: Status 404 returned error can't find the container with id 983d47fc6c49dd2c8fec728306c499f2e20948ad1e714f521cd59f425752df72
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.714566 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lbspb"]
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.731934 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.738858 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7"
Feb 18 14:01:49 crc kubenswrapper[4739]: W0218 14:01:49.744467 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd88dbdf9_f0d5_44e2_91c8_6bcc8a6e3713.slice/crio-1542f2a32767ea611a0dd0201115ccf7f36e2a7c9f28dba16c4caf8e215a8b80 WatchSource:0}: Error finding container 1542f2a32767ea611a0dd0201115ccf7f36e2a7c9f28dba16c4caf8e215a8b80: Status 404 returned error can't find the container with id 1542f2a32767ea611a0dd0201115ccf7f36e2a7c9f28dba16c4caf8e215a8b80
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751125 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751162 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-x9ffr\" (UID: \"b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751188 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxkr5\" (UniqueName: \"kubernetes.io/projected/9b2cc162-65ce-48dc-a49f-522d020772bd-kube-api-access-kxkr5\") pod \"machine-config-operator-74547568cd-9knp6\" (UID: \"9b2cc162-65ce-48dc-a49f-522d020772bd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6"
Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751210 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-idp-0-file-data\") pod
\"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751233 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr8zc\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-kube-api-access-lr8zc\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751299 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42c00254-0b69-45d3-8dd6-7f2ee914d65d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751321 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9b2cc162-65ce-48dc-a49f-522d020772bd-images\") pod \"machine-config-operator-74547568cd-9knp6\" (UID: \"9b2cc162-65ce-48dc-a49f-522d020772bd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751342 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9b2cc162-65ce-48dc-a49f-522d020772bd-proxy-tls\") pod \"machine-config-operator-74547568cd-9knp6\" (UID: \"9b2cc162-65ce-48dc-a49f-522d020772bd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751362 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-trusted-ca-bundle\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751385 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751419 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d076be7-905d-48ba-a63c-1c87999890ba-config\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751457 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9b2cc162-65ce-48dc-a49f-522d020772bd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9knp6\" (UID: 
\"9b2cc162-65ce-48dc-a49f-522d020772bd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751500 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/db4aad67-0ef8-474a-9e92-143738aed5b6-profile-collector-cert\") pod \"catalog-operator-68c6474976-kmtx7\" (UID: \"db4aad67-0ef8-474a-9e92-143738aed5b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751523 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751545 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/663bc659-8603-490f-9b6e-7ffe14960463-audit-dir\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751570 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ffd4b935-0435-4a73-a7cd-596856c63f84-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pgswj\" (UID: \"ffd4b935-0435-4a73-a7cd-596856c63f84\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751593 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751618 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d076be7-905d-48ba-a63c-1c87999890ba-etcd-service-ca\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751641 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1af3a272-dd2c-446d-9ac3-7a2c380c34c8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqzr8\" (UID: \"1af3a272-dd2c-446d-9ac3-7a2c380c34c8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751663 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t6ncg\" (UID: \"21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751689 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-bound-sa-token\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751708 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmkdk\" (UniqueName: \"kubernetes.io/projected/8d076be7-905d-48ba-a63c-1c87999890ba-kube-api-access-dmkdk\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751729 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbcls\" (UniqueName: \"kubernetes.io/projected/ffd4b935-0435-4a73-a7cd-596856c63f84-kube-api-access-hbcls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pgswj\" (UID: \"ffd4b935-0435-4a73-a7cd-596856c63f84\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751750 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb09df70-be06-48b6-a41d-16fb110b7c55-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751787 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42c00254-0b69-45d3-8dd6-7f2ee914d65d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751810 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-audit-policies\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751831 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8d076be7-905d-48ba-a63c-1c87999890ba-etcd-ca\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751850 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/dcd69695-49d3-46a8-9981-b592c44e827e-console-oauth-config\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751883 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-x9ffr\" (UID: \"b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751904 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1af3a272-dd2c-446d-9ac3-7a2c380c34c8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqzr8\" (UID: \"1af3a272-dd2c-446d-9ac3-7a2c380c34c8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751924 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9vjp\" (UniqueName: \"kubernetes.io/projected/ed2152ce-68ce-43a9-87fc-b55b6f46e093-kube-api-access-g9vjp\") pod \"cluster-samples-operator-665b6dd947-mknxc\" (UID: \"ed2152ce-68ce-43a9-87fc-b55b6f46e093\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751943 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bcf6796a-5a97-465e-927e-eaf313fcec05-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zzrbt\" (UID: \"bcf6796a-5a97-465e-927e-eaf313fcec05\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zzrbt" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.751966 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8d6ecdf-345d-463d-b7d4-d4cc930e38e2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lmzh5\" (UID: \"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752004 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8d6ecdf-345d-463d-b7d4-d4cc930e38e2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lmzh5\" (UID: \"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752040 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42c00254-0b69-45d3-8dd6-7f2ee914d65d-trusted-ca\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752071 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d076be7-905d-48ba-a63c-1c87999890ba-serving-cert\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752094 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvpnt\" (UniqueName: \"kubernetes.io/projected/dcd69695-49d3-46a8-9981-b592c44e827e-kube-api-access-fvpnt\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752117 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-service-ca\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752138 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb09df70-be06-48b6-a41d-16fb110b7c55-service-ca-bundle\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752161 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q72nm\" (UniqueName: \"kubernetes.io/projected/fb09df70-be06-48b6-a41d-16fb110b7c55-kube-api-access-q72nm\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752198 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb09df70-be06-48b6-a41d-16fb110b7c55-config\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752219 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-console-config\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752240 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb09df70-be06-48b6-a41d-16fb110b7c55-serving-cert\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752263 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752293 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-oauth-serving-cert\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752317 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd69695-49d3-46a8-9981-b592c44e827e-console-serving-cert\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752336 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752362 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db4aad67-0ef8-474a-9e92-143738aed5b6-srv-cert\") pod \"catalog-operator-68c6474976-kmtx7\" (UID: \"db4aad67-0ef8-474a-9e92-143738aed5b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752381 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be-config\") pod \"kube-apiserver-operator-766d6c64bb-t6ncg\" (UID: \"21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752415 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t6ncg\" (UID: \"21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752436 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752474 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b8d6ecdf-345d-463d-b7d4-d4cc930e38e2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lmzh5\" (UID: \"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752496 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq96w\" (UniqueName: \"kubernetes.io/projected/b8d6ecdf-345d-463d-b7d4-d4cc930e38e2-kube-api-access-vq96w\") pod \"cluster-image-registry-operator-dc59b4c8b-lmzh5\" (UID: \"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752521 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6stfg\" (UniqueName: \"kubernetes.io/projected/db4aad67-0ef8-474a-9e92-143738aed5b6-kube-api-access-6stfg\") pod \"catalog-operator-68c6474976-kmtx7\" (UID: \"db4aad67-0ef8-474a-9e92-143738aed5b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752543 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752564 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1af3a272-dd2c-446d-9ac3-7a2c380c34c8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqzr8\" (UID: \"1af3a272-dd2c-446d-9ac3-7a2c380c34c8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752601 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42c00254-0b69-45d3-8dd6-7f2ee914d65d-registry-certificates\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752638 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752663 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-registry-tls\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752687 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b95rs\" (UniqueName: \"kubernetes.io/projected/c8e8ae74-3ef7-42df-99f2-1f67c11edf6d-kube-api-access-b95rs\") pod \"downloads-7954f5f757-rtb8n\" (UID: \"c8e8ae74-3ef7-42df-99f2-1f67c11edf6d\") " pod="openshift-console/downloads-7954f5f757-rtb8n" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752711 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed2152ce-68ce-43a9-87fc-b55b6f46e093-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mknxc\" (UID: \"ed2152ce-68ce-43a9-87fc-b55b6f46e093\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752735 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752769 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d076be7-905d-48ba-a63c-1c87999890ba-etcd-client\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752799 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf4zb\" (UniqueName: \"kubernetes.io/projected/bcf6796a-5a97-465e-927e-eaf313fcec05-kube-api-access-tf4zb\") pod \"multus-admission-controller-857f4d67dd-zzrbt\" (UID: \"bcf6796a-5a97-465e-927e-eaf313fcec05\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zzrbt" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752817 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da-config\") pod \"kube-controller-manager-operator-78b949d7b-x9ffr\" (UID: \"b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752833 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.752852 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq67j\" (UniqueName: \"kubernetes.io/projected/663bc659-8603-490f-9b6e-7ffe14960463-kube-api-access-zq67j\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: E0218 14:01:49.755899 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:50.255883167 +0000 UTC m=+142.751604089 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.756053 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk"] Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.794658 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.822311 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx"] Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.824076 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853311 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853592 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t6ncg\" (UID: \"21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853638 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d004f5dd-a97b-4707-be47-cd5a9bb69c8a-certs\") pod \"machine-config-server-fjgwd\" (UID: \"d004f5dd-a97b-4707-be47-cd5a9bb69c8a\") " pod="openshift-machine-config-operator/machine-config-server-fjgwd" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853676 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853700 4739 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b8d6ecdf-345d-463d-b7d4-d4cc930e38e2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lmzh5\" (UID: \"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853723 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq96w\" (UniqueName: \"kubernetes.io/projected/b8d6ecdf-345d-463d-b7d4-d4cc930e38e2-kube-api-access-vq96w\") pod \"cluster-image-registry-operator-dc59b4c8b-lmzh5\" (UID: \"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853744 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6stfg\" (UniqueName: \"kubernetes.io/projected/db4aad67-0ef8-474a-9e92-143738aed5b6-kube-api-access-6stfg\") pod \"catalog-operator-68c6474976-kmtx7\" (UID: \"db4aad67-0ef8-474a-9e92-143738aed5b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853766 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853786 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-csi-data-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853809 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1af3a272-dd2c-446d-9ac3-7a2c380c34c8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqzr8\" (UID: \"1af3a272-dd2c-446d-9ac3-7a2c380c34c8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853831 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d27c3dde-4f78-49ec-8cc2-39c588d91f56-apiservice-cert\") pod \"packageserver-d55dfcdfc-k8g5m\" (UID: \"d27c3dde-4f78-49ec-8cc2-39c588d91f56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853854 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42c00254-0b69-45d3-8dd6-7f2ee914d65d-registry-certificates\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853873 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/d27c3dde-4f78-49ec-8cc2-39c588d91f56-tmpfs\") pod \"packageserver-d55dfcdfc-k8g5m\" (UID: \"d27c3dde-4f78-49ec-8cc2-39c588d91f56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853894 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgvzk\" (UniqueName: \"kubernetes.io/projected/d27c3dde-4f78-49ec-8cc2-39c588d91f56-kube-api-access-mgvzk\") pod \"packageserver-d55dfcdfc-k8g5m\" (UID: \"d27c3dde-4f78-49ec-8cc2-39c588d91f56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853950 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5873f31d-7486-489d-866f-9442195a86bf-metrics-tls\") pod \"dns-default-8lgk6\" (UID: \"5873f31d-7486-489d-866f-9442195a86bf\") " pod="openshift-dns/dns-default-8lgk6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.853972 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-mountpoint-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854019 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-registry-tls\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854039 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b95rs\" (UniqueName: \"kubernetes.io/projected/c8e8ae74-3ef7-42df-99f2-1f67c11edf6d-kube-api-access-b95rs\") pod \"downloads-7954f5f757-rtb8n\" (UID: \"c8e8ae74-3ef7-42df-99f2-1f67c11edf6d\") " pod="openshift-console/downloads-7954f5f757-rtb8n" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854082 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854106 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed2152ce-68ce-43a9-87fc-b55b6f46e093-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mknxc\" (UID: \"ed2152ce-68ce-43a9-87fc-b55b6f46e093\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854142 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da-config\") pod \"kube-controller-manager-operator-78b949d7b-x9ffr\" (UID: \"b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854164 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d076be7-905d-48ba-a63c-1c87999890ba-etcd-client\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854185 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf4zb\" (UniqueName: \"kubernetes.io/projected/bcf6796a-5a97-465e-927e-eaf313fcec05-kube-api-access-tf4zb\") pod \"multus-admission-controller-857f4d67dd-zzrbt\" (UID: \"bcf6796a-5a97-465e-927e-eaf313fcec05\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zzrbt" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854207 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854231 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-plugins-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854253 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz4jv\" (UniqueName: \"kubernetes.io/projected/45eb000e-b333-47b8-9cb5-d383ca0628dd-kube-api-access-dz4jv\") pod \"service-ca-9c57cc56f-67w4c\" (UID: \"45eb000e-b333-47b8-9cb5-d383ca0628dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-67w4c" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854300 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq67j\" (UniqueName: \"kubernetes.io/projected/663bc659-8603-490f-9b6e-7ffe14960463-kube-api-access-zq67j\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854325 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854345 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc 
kubenswrapper[4739]: I0218 14:01:49.854369 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-x9ffr\" (UID: \"b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854393 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxkr5\" (UniqueName: \"kubernetes.io/projected/9b2cc162-65ce-48dc-a49f-522d020772bd-kube-api-access-kxkr5\") pod \"machine-config-operator-74547568cd-9knp6\" (UID: \"9b2cc162-65ce-48dc-a49f-522d020772bd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854425 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr8zc\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-kube-api-access-lr8zc\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854466 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/99012b96-1a3e-48ae-ac97-55ab91c6eb6f-metrics-tls\") pod \"dns-operator-744455d44c-mxwhp\" (UID: \"99012b96-1a3e-48ae-ac97-55ab91c6eb6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-mxwhp" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854489 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdqxz\" (UniqueName: \"kubernetes.io/projected/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-kube-api-access-bdqxz\") pod \"collect-profiles-29523720-vljqj\" (UID: \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854510 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d27c3dde-4f78-49ec-8cc2-39c588d91f56-webhook-cert\") pod \"packageserver-d55dfcdfc-k8g5m\" (UID: \"d27c3dde-4f78-49ec-8cc2-39c588d91f56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854558 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42c00254-0b69-45d3-8dd6-7f2ee914d65d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854580 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9b2cc162-65ce-48dc-a49f-522d020772bd-images\") pod \"machine-config-operator-74547568cd-9knp6\" (UID: \"9b2cc162-65ce-48dc-a49f-522d020772bd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854599 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9b2cc162-65ce-48dc-a49f-522d020772bd-proxy-tls\") pod \"machine-config-operator-74547568cd-9knp6\" (UID: \"9b2cc162-65ce-48dc-a49f-522d020772bd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854644 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-trusted-ca-bundle\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854667 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854689 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4e774d72-bc18-4fab-b988-c36f581d7560-bound-sa-token\") pod \"ingress-operator-5b745b69d9-464cg\" (UID: \"4e774d72-bc18-4fab-b988-c36f581d7560\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854712 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9b2cc162-65ce-48dc-a49f-522d020772bd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9knp6\" (UID: \"9b2cc162-65ce-48dc-a49f-522d020772bd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854736 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d004f5dd-a97b-4707-be47-cd5a9bb69c8a-node-bootstrap-token\") pod \"machine-config-server-fjgwd\" (UID: \"d004f5dd-a97b-4707-be47-cd5a9bb69c8a\") " pod="openshift-machine-config-operator/machine-config-server-fjgwd" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854761 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d076be7-905d-48ba-a63c-1c87999890ba-config\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854789 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854813 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/db4aad67-0ef8-474a-9e92-143738aed5b6-profile-collector-cert\") pod \"catalog-operator-68c6474976-kmtx7\" (UID: \"db4aad67-0ef8-474a-9e92-143738aed5b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854869 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb6d5402-0976-4291-b4ee-5c481fd8df72-cert\") pod \"ingress-canary-fbnbw\" (UID: \"bb6d5402-0976-4291-b4ee-5c481fd8df72\") " pod="openshift-ingress-canary/ingress-canary-fbnbw" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854895 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/663bc659-8603-490f-9b6e-7ffe14960463-audit-dir\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854916 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-socket-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854939 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/45eb000e-b333-47b8-9cb5-d383ca0628dd-signing-cabundle\") pod \"service-ca-9c57cc56f-67w4c\" (UID: \"45eb000e-b333-47b8-9cb5-d383ca0628dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-67w4c" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854960 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4e774d72-bc18-4fab-b988-c36f581d7560-metrics-tls\") pod \"ingress-operator-5b745b69d9-464cg\" (UID: \"4e774d72-bc18-4fab-b988-c36f581d7560\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.854984 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k7pd\" (UniqueName: \"kubernetes.io/projected/bb6d5402-0976-4291-b4ee-5c481fd8df72-kube-api-access-8k7pd\") pod \"ingress-canary-fbnbw\" (UID: \"bb6d5402-0976-4291-b4ee-5c481fd8df72\") " pod="openshift-ingress-canary/ingress-canary-fbnbw" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.855023 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ffd4b935-0435-4a73-a7cd-596856c63f84-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pgswj\" (UID: \"ffd4b935-0435-4a73-a7cd-596856c63f84\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.855048 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: 
\"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.855072 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d076be7-905d-48ba-a63c-1c87999890ba-etcd-service-ca\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.855097 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1af3a272-dd2c-446d-9ac3-7a2c380c34c8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqzr8\" (UID: \"1af3a272-dd2c-446d-9ac3-7a2c380c34c8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.855117 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t6ncg\" (UID: \"21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.856060 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d076be7-905d-48ba-a63c-1c87999890ba-config\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.856106 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1af3a272-dd2c-446d-9ac3-7a2c380c34c8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqzr8\" (UID: \"1af3a272-dd2c-446d-9ac3-7a2c380c34c8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.856782 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da-config\") pod \"kube-controller-manager-operator-78b949d7b-x9ffr\" (UID: \"b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.857144 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42c00254-0b69-45d3-8dd6-7f2ee914d65d-registry-certificates\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.857805 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/663bc659-8603-490f-9b6e-7ffe14960463-audit-dir\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.857895 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9b2cc162-65ce-48dc-a49f-522d020772bd-images\") pod \"machine-config-operator-74547568cd-9knp6\" (UID: \"9b2cc162-65ce-48dc-a49f-522d020772bd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.858089 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42c00254-0b69-45d3-8dd6-7f2ee914d65d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.858119 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d076be7-905d-48ba-a63c-1c87999890ba-etcd-service-ca\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.858355 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-trusted-ca-bundle\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.858361 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9b2cc162-65ce-48dc-a49f-522d020772bd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9knp6\" (UID: \"9b2cc162-65ce-48dc-a49f-522d020772bd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" Feb 18 14:01:49 crc kubenswrapper[4739]: E0218 14:01:49.858417 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:50.35839707 +0000 UTC m=+142.854117992 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.858625 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5873f31d-7486-489d-866f-9442195a86bf-config-volume\") pod \"dns-default-8lgk6\" (UID: \"5873f31d-7486-489d-866f-9442195a86bf\") " pod="openshift-dns/dns-default-8lgk6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.858708 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbcls\" (UniqueName: \"kubernetes.io/projected/ffd4b935-0435-4a73-a7cd-596856c63f84-kube-api-access-hbcls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pgswj\" (UID: \"ffd4b935-0435-4a73-a7cd-596856c63f84\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.858865 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-bound-sa-token\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.858914 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmkdk\" (UniqueName: \"kubernetes.io/projected/8d076be7-905d-48ba-a63c-1c87999890ba-kube-api-access-dmkdk\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.858998 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb09df70-be06-48b6-a41d-16fb110b7c55-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.859093 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e774d72-bc18-4fab-b988-c36f581d7560-trusted-ca\") pod \"ingress-operator-5b745b69d9-464cg\" (UID: \"4e774d72-bc18-4fab-b988-c36f581d7560\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.859180 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgllc\" (UniqueName: \"kubernetes.io/projected/d004f5dd-a97b-4707-be47-cd5a9bb69c8a-kube-api-access-mgllc\") pod \"machine-config-server-fjgwd\" (UID: \"d004f5dd-a97b-4707-be47-cd5a9bb69c8a\") " pod="openshift-machine-config-operator/machine-config-server-fjgwd" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.859247 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42c00254-0b69-45d3-8dd6-7f2ee914d65d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.859276 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-audit-policies\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.859298 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-config-volume\") pod \"collect-profiles-29523720-vljqj\" (UID: \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860235 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8d076be7-905d-48ba-a63c-1c87999890ba-etcd-ca\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860272 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dcd69695-49d3-46a8-9981-b592c44e827e-console-oauth-config\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860297 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9vjp\" (UniqueName: \"kubernetes.io/projected/ed2152ce-68ce-43a9-87fc-b55b6f46e093-kube-api-access-g9vjp\") pod \"cluster-samples-operator-665b6dd947-mknxc\" (UID: \"ed2152ce-68ce-43a9-87fc-b55b6f46e093\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860319 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bcf6796a-5a97-465e-927e-eaf313fcec05-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zzrbt\" (UID: \"bcf6796a-5a97-465e-927e-eaf313fcec05\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zzrbt" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860342 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-x9ffr\" (UID: \"b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860343 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/db4aad67-0ef8-474a-9e92-143738aed5b6-profile-collector-cert\") pod \"catalog-operator-68c6474976-kmtx7\" (UID: \"db4aad67-0ef8-474a-9e92-143738aed5b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860364 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1af3a272-dd2c-446d-9ac3-7a2c380c34c8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqzr8\" (UID: \"1af3a272-dd2c-446d-9ac3-7a2c380c34c8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860372 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb09df70-be06-48b6-a41d-16fb110b7c55-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860404 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8d6ecdf-345d-463d-b7d4-d4cc930e38e2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lmzh5\" (UID: \"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860496 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5455\" (UniqueName: \"kubernetes.io/projected/5873f31d-7486-489d-866f-9442195a86bf-kube-api-access-l5455\") pod \"dns-default-8lgk6\" (UID: \"5873f31d-7486-489d-866f-9442195a86bf\") " pod="openshift-dns/dns-default-8lgk6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860518 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-registration-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860568 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8d6ecdf-345d-463d-b7d4-d4cc930e38e2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lmzh5\" (UID: \"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860615 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qfljx\" (UID: \"34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860635 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/45eb000e-b333-47b8-9cb5-d383ca0628dd-signing-key\") pod \"service-ca-9c57cc56f-67w4c\" (UID: \"45eb000e-b333-47b8-9cb5-d383ca0628dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-67w4c" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860675 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42c00254-0b69-45d3-8dd6-7f2ee914d65d-trusted-ca\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860709 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx6n9\" (UniqueName: \"kubernetes.io/projected/34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac-kube-api-access-jx6n9\") pod \"package-server-manager-789f6589d5-qfljx\" (UID: \"34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860728 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvpnt\" (UniqueName: \"kubernetes.io/projected/dcd69695-49d3-46a8-9981-b592c44e827e-kube-api-access-fvpnt\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860749 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d076be7-905d-48ba-a63c-1c87999890ba-serving-cert\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860779 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-service-ca\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860798 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4rkm\" (UniqueName: \"kubernetes.io/projected/db115d76-8ccf-4c6b-8b1f-f507ad381c95-kube-api-access-f4rkm\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860818 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q72nm\" (UniqueName: \"kubernetes.io/projected/fb09df70-be06-48b6-a41d-16fb110b7c55-kube-api-access-q72nm\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860830 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d076be7-905d-48ba-a63c-1c87999890ba-etcd-client\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc 
kubenswrapper[4739]: I0218 14:01:49.860834 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed2152ce-68ce-43a9-87fc-b55b6f46e093-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mknxc\" (UID: \"ed2152ce-68ce-43a9-87fc-b55b6f46e093\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.860431 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-x9ffr\" (UID: \"b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.861286 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-registry-tls\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.861599 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8d6ecdf-345d-463d-b7d4-d4cc930e38e2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lmzh5\" (UID: \"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.861637 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb09df70-be06-48b6-a41d-16fb110b7c55-service-ca-bundle\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.861669 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trndz\" (UniqueName: \"kubernetes.io/projected/4e774d72-bc18-4fab-b988-c36f581d7560-kube-api-access-trndz\") pod \"ingress-operator-5b745b69d9-464cg\" (UID: \"4e774d72-bc18-4fab-b988-c36f581d7560\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.861688 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4sq7\" (UniqueName: \"kubernetes.io/projected/99012b96-1a3e-48ae-ac97-55ab91c6eb6f-kube-api-access-k4sq7\") pod \"dns-operator-744455d44c-mxwhp\" (UID: \"99012b96-1a3e-48ae-ac97-55ab91c6eb6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-mxwhp" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.861835 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb09df70-be06-48b6-a41d-16fb110b7c55-config\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.861875 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-secret-volume\") pod \"collect-profiles-29523720-vljqj\" (UID: \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.861894 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-console-config\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.861912 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.862012 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb09df70-be06-48b6-a41d-16fb110b7c55-serving-cert\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.862032 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-oauth-serving-cert\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.862061 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd69695-49d3-46a8-9981-b592c44e827e-console-serving-cert\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.862078 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.862239 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb09df70-be06-48b6-a41d-16fb110b7c55-service-ca-bundle\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.862316 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1af3a272-dd2c-446d-9ac3-7a2c380c34c8-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-sqzr8\" (UID: \"1af3a272-dd2c-446d-9ac3-7a2c380c34c8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.862558 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db4aad67-0ef8-474a-9e92-143738aed5b6-srv-cert\") pod \"catalog-operator-68c6474976-kmtx7\" (UID: \"db4aad67-0ef8-474a-9e92-143738aed5b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.862651 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be-config\") pod \"kube-apiserver-operator-766d6c64bb-t6ncg\" (UID: \"21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.862800 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb09df70-be06-48b6-a41d-16fb110b7c55-config\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.863048 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-console-config\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.863541 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-oauth-serving-cert\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.864154 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ffd4b935-0435-4a73-a7cd-596856c63f84-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pgswj\" (UID: \"ffd4b935-0435-4a73-a7cd-596856c63f84\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.864591 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9b2cc162-65ce-48dc-a49f-522d020772bd-proxy-tls\") pod \"machine-config-operator-74547568cd-9knp6\" (UID: \"9b2cc162-65ce-48dc-a49f-522d020772bd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.864960 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d076be7-905d-48ba-a63c-1c87999890ba-serving-cert\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 
14:01:49.865249 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd69695-49d3-46a8-9981-b592c44e827e-console-serving-cert\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.865326 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dcd69695-49d3-46a8-9981-b592c44e827e-console-oauth-config\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.865656 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db4aad67-0ef8-474a-9e92-143738aed5b6-srv-cert\") pod \"catalog-operator-68c6474976-kmtx7\" (UID: \"db4aad67-0ef8-474a-9e92-143738aed5b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.866530 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb09df70-be06-48b6-a41d-16fb110b7c55-serving-cert\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.866887 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bcf6796a-5a97-465e-927e-eaf313fcec05-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zzrbt\" (UID: \"bcf6796a-5a97-465e-927e-eaf313fcec05\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zzrbt" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.867017 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42c00254-0b69-45d3-8dd6-7f2ee914d65d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.867513 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42c00254-0b69-45d3-8dd6-7f2ee914d65d-trusted-ca\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.894790 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be-config\") pod \"kube-apiserver-operator-766d6c64bb-t6ncg\" (UID: \"21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.894966 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t6ncg\" (UID: \"21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.895493 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-service-ca\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.895844 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.898588 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.901240 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.901310 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-audit-policies\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.901707 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.901772 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8d6ecdf-345d-463d-b7d4-d4cc930e38e2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lmzh5\" (UID: \"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.902083 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8d076be7-905d-48ba-a63c-1c87999890ba-etcd-ca\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.902494 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.902567 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.903131 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.903226 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.901660 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.919139 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.919159 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.925697 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq67j\" (UniqueName: \"kubernetes.io/projected/663bc659-8603-490f-9b6e-7ffe14960463-kube-api-access-zq67j\") pod \"oauth-openshift-558db77b4-64j2j\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.930144 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxkr5\" (UniqueName: 
\"kubernetes.io/projected/9b2cc162-65ce-48dc-a49f-522d020772bd-kube-api-access-kxkr5\") pod \"machine-config-operator-74547568cd-9knp6\" (UID: \"9b2cc162-65ce-48dc-a49f-522d020772bd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.940704 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b95rs\" (UniqueName: \"kubernetes.io/projected/c8e8ae74-3ef7-42df-99f2-1f67c11edf6d-kube-api-access-b95rs\") pod \"downloads-7954f5f757-rtb8n\" (UID: \"c8e8ae74-3ef7-42df-99f2-1f67c11edf6d\") " pod="openshift-console/downloads-7954f5f757-rtb8n" Feb 18 14:01:49 crc kubenswrapper[4739]: W0218 14:01:49.959370 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6cef9b9_56ee_4d0a_8c13_651e3f649a0e.slice/crio-93cf01cf23eb4d9e3b80af8345d0f7d0393165cae409b67d5aab1659268d8033 WatchSource:0}: Error finding container 93cf01cf23eb4d9e3b80af8345d0f7d0393165cae409b67d5aab1659268d8033: Status 404 returned error can't find the container with id 93cf01cf23eb4d9e3b80af8345d0f7d0393165cae409b67d5aab1659268d8033 Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.963831 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/99012b96-1a3e-48ae-ac97-55ab91c6eb6f-metrics-tls\") pod \"dns-operator-744455d44c-mxwhp\" (UID: \"99012b96-1a3e-48ae-ac97-55ab91c6eb6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-mxwhp" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.963875 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d27c3dde-4f78-49ec-8cc2-39c588d91f56-webhook-cert\") pod \"packageserver-d55dfcdfc-k8g5m\" (UID: \"d27c3dde-4f78-49ec-8cc2-39c588d91f56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.963898 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdqxz\" (UniqueName: \"kubernetes.io/projected/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-kube-api-access-bdqxz\") pod \"collect-profiles-29523720-vljqj\" (UID: \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.963936 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4e774d72-bc18-4fab-b988-c36f581d7560-bound-sa-token\") pod \"ingress-operator-5b745b69d9-464cg\" (UID: \"4e774d72-bc18-4fab-b988-c36f581d7560\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.963960 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d004f5dd-a97b-4707-be47-cd5a9bb69c8a-node-bootstrap-token\") pod \"machine-config-server-fjgwd\" (UID: \"d004f5dd-a97b-4707-be47-cd5a9bb69c8a\") " pod="openshift-machine-config-operator/machine-config-server-fjgwd" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.963986 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb6d5402-0976-4291-b4ee-5c481fd8df72-cert\") pod \"ingress-canary-fbnbw\" (UID: 
\"bb6d5402-0976-4291-b4ee-5c481fd8df72\") " pod="openshift-ingress-canary/ingress-canary-fbnbw" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964007 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-socket-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964029 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/45eb000e-b333-47b8-9cb5-d383ca0628dd-signing-cabundle\") pod \"service-ca-9c57cc56f-67w4c\" (UID: \"45eb000e-b333-47b8-9cb5-d383ca0628dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-67w4c" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964051 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4e774d72-bc18-4fab-b988-c36f581d7560-metrics-tls\") pod \"ingress-operator-5b745b69d9-464cg\" (UID: \"4e774d72-bc18-4fab-b988-c36f581d7560\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964074 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k7pd\" (UniqueName: \"kubernetes.io/projected/bb6d5402-0976-4291-b4ee-5c481fd8df72-kube-api-access-8k7pd\") pod \"ingress-canary-fbnbw\" (UID: \"bb6d5402-0976-4291-b4ee-5c481fd8df72\") " pod="openshift-ingress-canary/ingress-canary-fbnbw" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964097 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5873f31d-7486-489d-866f-9442195a86bf-config-volume\") pod \"dns-default-8lgk6\" (UID: \"5873f31d-7486-489d-866f-9442195a86bf\") " pod="openshift-dns/dns-default-8lgk6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964141 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e774d72-bc18-4fab-b988-c36f581d7560-trusted-ca\") pod \"ingress-operator-5b745b69d9-464cg\" (UID: \"4e774d72-bc18-4fab-b988-c36f581d7560\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964160 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgllc\" (UniqueName: \"kubernetes.io/projected/d004f5dd-a97b-4707-be47-cd5a9bb69c8a-kube-api-access-mgllc\") pod \"machine-config-server-fjgwd\" (UID: \"d004f5dd-a97b-4707-be47-cd5a9bb69c8a\") " pod="openshift-machine-config-operator/machine-config-server-fjgwd" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964182 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-config-volume\") pod \"collect-profiles-29523720-vljqj\" (UID: \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964252 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5455\" (UniqueName: 
\"kubernetes.io/projected/5873f31d-7486-489d-866f-9442195a86bf-kube-api-access-l5455\") pod \"dns-default-8lgk6\" (UID: \"5873f31d-7486-489d-866f-9442195a86bf\") " pod="openshift-dns/dns-default-8lgk6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964274 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-registration-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964298 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qfljx\" (UID: \"34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964322 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/45eb000e-b333-47b8-9cb5-d383ca0628dd-signing-key\") pod \"service-ca-9c57cc56f-67w4c\" (UID: \"45eb000e-b333-47b8-9cb5-d383ca0628dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-67w4c" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964353 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx6n9\" (UniqueName: \"kubernetes.io/projected/34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac-kube-api-access-jx6n9\") pod \"package-server-manager-789f6589d5-qfljx\" (UID: \"34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964384 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4rkm\" (UniqueName: \"kubernetes.io/projected/db115d76-8ccf-4c6b-8b1f-f507ad381c95-kube-api-access-f4rkm\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964416 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trndz\" (UniqueName: \"kubernetes.io/projected/4e774d72-bc18-4fab-b988-c36f581d7560-kube-api-access-trndz\") pod \"ingress-operator-5b745b69d9-464cg\" (UID: \"4e774d72-bc18-4fab-b988-c36f581d7560\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964436 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4sq7\" (UniqueName: \"kubernetes.io/projected/99012b96-1a3e-48ae-ac97-55ab91c6eb6f-kube-api-access-k4sq7\") pod \"dns-operator-744455d44c-mxwhp\" (UID: \"99012b96-1a3e-48ae-ac97-55ab91c6eb6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-mxwhp" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964477 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-secret-volume\") pod \"collect-profiles-29523720-vljqj\" (UID: \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964534 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d004f5dd-a97b-4707-be47-cd5a9bb69c8a-certs\") pod \"machine-config-server-fjgwd\" (UID: \"d004f5dd-a97b-4707-be47-cd5a9bb69c8a\") " pod="openshift-machine-config-operator/machine-config-server-fjgwd" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964562 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-csi-data-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964583 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d27c3dde-4f78-49ec-8cc2-39c588d91f56-apiservice-cert\") pod \"packageserver-d55dfcdfc-k8g5m\" (UID: \"d27c3dde-4f78-49ec-8cc2-39c588d91f56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964606 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d27c3dde-4f78-49ec-8cc2-39c588d91f56-tmpfs\") pod \"packageserver-d55dfcdfc-k8g5m\" (UID: \"d27c3dde-4f78-49ec-8cc2-39c588d91f56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964625 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgvzk\" (UniqueName: \"kubernetes.io/projected/d27c3dde-4f78-49ec-8cc2-39c588d91f56-kube-api-access-mgvzk\") pod \"packageserver-d55dfcdfc-k8g5m\" (UID: \"d27c3dde-4f78-49ec-8cc2-39c588d91f56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964649 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-mountpoint-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964672 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5873f31d-7486-489d-866f-9442195a86bf-metrics-tls\") pod \"dns-default-8lgk6\" (UID: \"5873f31d-7486-489d-866f-9442195a86bf\") " pod="openshift-dns/dns-default-8lgk6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964698 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964737 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-plugins-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964758 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz4jv\" (UniqueName: \"kubernetes.io/projected/45eb000e-b333-47b8-9cb5-d383ca0628dd-kube-api-access-dz4jv\") pod \"service-ca-9c57cc56f-67w4c\" (UID: \"45eb000e-b333-47b8-9cb5-d383ca0628dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-67w4c" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.965683 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-socket-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.965750 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-mountpoint-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:49 crc kubenswrapper[4739]: E0218 14:01:49.965983 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:50.465968614 +0000 UTC m=+142.961689536 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.966083 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d27c3dde-4f78-49ec-8cc2-39c588d91f56-tmpfs\") pod \"packageserver-d55dfcdfc-k8g5m\" (UID: \"d27c3dde-4f78-49ec-8cc2-39c588d91f56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.966279 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-plugins-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.966757 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-registration-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.967408 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-config-volume\") pod \"collect-profiles-29523720-vljqj\" (UID: \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.967490 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/db115d76-8ccf-4c6b-8b1f-f507ad381c95-csi-data-dir\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.968278 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5873f31d-7486-489d-866f-9442195a86bf-config-volume\") pod \"dns-default-8lgk6\" (UID: \"5873f31d-7486-489d-866f-9442195a86bf\") " pod="openshift-dns/dns-default-8lgk6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.968360 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e774d72-bc18-4fab-b988-c36f581d7560-trusted-ca\") pod \"ingress-operator-5b745b69d9-464cg\" (UID: \"4e774d72-bc18-4fab-b988-c36f581d7560\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.964020 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr8zc\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-kube-api-access-lr8zc\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.969019 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/45eb000e-b333-47b8-9cb5-d383ca0628dd-signing-cabundle\") pod \"service-ca-9c57cc56f-67w4c\" (UID: \"45eb000e-b333-47b8-9cb5-d383ca0628dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-67w4c" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.971140 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4e774d72-bc18-4fab-b988-c36f581d7560-metrics-tls\") pod \"ingress-operator-5b745b69d9-464cg\" (UID: \"4e774d72-bc18-4fab-b988-c36f581d7560\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.972518 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb6d5402-0976-4291-b4ee-5c481fd8df72-cert\") pod \"ingress-canary-fbnbw\" (UID: \"bb6d5402-0976-4291-b4ee-5c481fd8df72\") " pod="openshift-ingress-canary/ingress-canary-fbnbw" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.973685 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/45eb000e-b333-47b8-9cb5-d383ca0628dd-signing-key\") pod \"service-ca-9c57cc56f-67w4c\" (UID: \"45eb000e-b333-47b8-9cb5-d383ca0628dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-67w4c" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.974777 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/d27c3dde-4f78-49ec-8cc2-39c588d91f56-apiservice-cert\") pod \"packageserver-d55dfcdfc-k8g5m\" (UID: \"d27c3dde-4f78-49ec-8cc2-39c588d91f56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.974959 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qfljx\" (UID: \"34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.975751 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d27c3dde-4f78-49ec-8cc2-39c588d91f56-webhook-cert\") pod \"packageserver-d55dfcdfc-k8g5m\" (UID: \"d27c3dde-4f78-49ec-8cc2-39c588d91f56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.976088 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d004f5dd-a97b-4707-be47-cd5a9bb69c8a-certs\") pod \"machine-config-server-fjgwd\" (UID: \"d004f5dd-a97b-4707-be47-cd5a9bb69c8a\") " pod="openshift-machine-config-operator/machine-config-server-fjgwd" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.977274 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5873f31d-7486-489d-866f-9442195a86bf-metrics-tls\") pod \"dns-default-8lgk6\" (UID: \"5873f31d-7486-489d-866f-9442195a86bf\") " pod="openshift-dns/dns-default-8lgk6" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.977378 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf4zb\" (UniqueName: \"kubernetes.io/projected/bcf6796a-5a97-465e-927e-eaf313fcec05-kube-api-access-tf4zb\") pod \"multus-admission-controller-857f4d67dd-zzrbt\" (UID: \"bcf6796a-5a97-465e-927e-eaf313fcec05\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zzrbt" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.977839 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/99012b96-1a3e-48ae-ac97-55ab91c6eb6f-metrics-tls\") pod \"dns-operator-744455d44c-mxwhp\" (UID: \"99012b96-1a3e-48ae-ac97-55ab91c6eb6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-mxwhp" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.977891 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-secret-volume\") pod \"collect-profiles-29523720-vljqj\" (UID: \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.980258 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d004f5dd-a97b-4707-be47-cd5a9bb69c8a-node-bootstrap-token\") pod \"machine-config-server-fjgwd\" (UID: \"d004f5dd-a97b-4707-be47-cd5a9bb69c8a\") " pod="openshift-machine-config-operator/machine-config-server-fjgwd" Feb 18 14:01:49 crc kubenswrapper[4739]: I0218 14:01:49.999703 
4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b8d6ecdf-345d-463d-b7d4-d4cc930e38e2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lmzh5\" (UID: \"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.012219 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.021545 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq96w\" (UniqueName: \"kubernetes.io/projected/b8d6ecdf-345d-463d-b7d4-d4cc930e38e2-kube-api-access-vq96w\") pod \"cluster-image-registry-operator-dc59b4c8b-lmzh5\" (UID: \"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.043881 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t6ncg\" (UID: \"21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.045075 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zzrbt" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.064437 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6stfg\" (UniqueName: \"kubernetes.io/projected/db4aad67-0ef8-474a-9e92-143738aed5b6-kube-api-access-6stfg\") pod \"catalog-operator-68c6474976-kmtx7\" (UID: \"db4aad67-0ef8-474a-9e92-143738aed5b6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.066474 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.066564 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:50.566542417 +0000 UTC m=+143.062263339 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.067027 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.068280 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:50.568270062 +0000 UTC m=+143.063990984 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.079186 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmkdk\" (UniqueName: \"kubernetes.io/projected/8d076be7-905d-48ba-a63c-1c87999890ba-kube-api-access-dmkdk\") pod \"etcd-operator-b45778765-b2m46\" (UID: \"8d076be7-905d-48ba-a63c-1c87999890ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.095242 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.100028 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-bound-sa-token\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.130060 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-n78q8" event={"ID":"86f15b94-810d-4448-a663-fd8862f0e601","Type":"ContainerStarted","Data":"6a94ba1746bb9046411621744c6bc575cf53f8390a14bb5b831460a72bde647b"} Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.132575 4739 generic.go:334] "Generic (PLEG): container finished" podID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerID="00784d510eb0a7114170d2f3527c5738b72eabb6feec6367c4900c0af18aeb52" exitCode=0 Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.132749 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" event={"ID":"6a73ee03-bb76-478c-bcd1-2d08f0e6f538","Type":"ContainerDied","Data":"00784d510eb0a7114170d2f3527c5738b72eabb6feec6367c4900c0af18aeb52"} Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.132785 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" event={"ID":"6a73ee03-bb76-478c-bcd1-2d08f0e6f538","Type":"ContainerStarted","Data":"c9bb7b5da63b37ef6c871e86f33af4d9df9ded3b05196e2a8e89b2f887a04f2a"} Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.137013 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk" event={"ID":"3440ceb6-cf9c-4732-bafb-8a58d419276a","Type":"ContainerStarted","Data":"9c4a15b6d2187e9d750901a73c94c3cb04a444f5d12747c31f942fb283c997a5"} Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.139956 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" event={"ID":"c43a59b1-306c-4a0e-9f9f-fad2e9082d55","Type":"ContainerStarted","Data":"de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1"} Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.139983 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" event={"ID":"c43a59b1-306c-4a0e-9f9f-fad2e9082d55","Type":"ContainerStarted","Data":"6ae935e4756c3ac9dd9d42b9a107606b44a96ac470faeaa29302b35c3bb1c8df"} Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.143789 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" event={"ID":"d41d7405-9b25-414a-a247-1d945df68f89","Type":"ContainerStarted","Data":"f11807ff00d70727eedd73b8cfc97f26df2ef13d4d075612357c262e9f7e3a7b"} Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.143818 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" event={"ID":"d41d7405-9b25-414a-a247-1d945df68f89","Type":"ContainerStarted","Data":"7f5f1179086b0a7de906fc48820274d8fe29f9e5fa08346a3858a1510789c397"} Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.144544 4739 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5cdhr" event={"ID":"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e","Type":"ContainerStarted","Data":"93cf01cf23eb4d9e3b80af8345d0f7d0393165cae409b67d5aab1659268d8033"} Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.145413 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lvb5" event={"ID":"52fa7608-a369-4813-8a4d-3e2f8b84c885","Type":"ContainerStarted","Data":"88f333bff0ef6dbf7f88e6a1ea8d79ef8fdf9114af426c2cffe30e5eddc12780"} Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.145432 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lvb5" event={"ID":"52fa7608-a369-4813-8a4d-3e2f8b84c885","Type":"ContainerStarted","Data":"983d47fc6c49dd2c8fec728306c499f2e20948ad1e714f521cd59f425752df72"} Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.146230 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx" event={"ID":"537a1340-9cce-4d5b-9cff-35d934fc4d71","Type":"ContainerStarted","Data":"c2572353e5e0c823eeff3b1e32bc342cd7bbc8ae2f7590fb64fad5e83246bea1"} Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.147364 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" event={"ID":"9d038913-f9eb-40ed-89a8-4687734573aa","Type":"ContainerStarted","Data":"e5bd52a0075af14b489c4570d608174633afa7b7a881b0dd3fcc09f4d546742f"} Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.147380 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" event={"ID":"9d038913-f9eb-40ed-89a8-4687734573aa","Type":"ContainerStarted","Data":"a8bc596c47e78bec4371bb8a6e511c0017cd3d84224a0fed49e43a4fd604f54f"} Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.148434 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" event={"ID":"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39","Type":"ContainerStarted","Data":"8fe561d69997a42f05c72d8193b431b41c69814dd140f03816516811cdf03267"} Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.148463 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" event={"ID":"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39","Type":"ContainerStarted","Data":"fae6dc1b6a99284726a5c316e9b142133b64b76e06f03661a6baf4b3e9620752"} Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.149112 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.150227 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" event={"ID":"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713","Type":"ContainerStarted","Data":"1542f2a32767ea611a0dd0201115ccf7f36e2a7c9f28dba16c4caf8e215a8b80"} Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.167709 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.168039 4739 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-hkhdz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.168066 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" podUID="eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.170124 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:50.670094288 +0000 UTC m=+143.165815220 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.173656 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9vjp\" (UniqueName: \"kubernetes.io/projected/ed2152ce-68ce-43a9-87fc-b55b6f46e093-kube-api-access-g9vjp\") pod \"cluster-samples-operator-665b6dd947-mknxc\" (UID: \"ed2152ce-68ce-43a9-87fc-b55b6f46e093\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.186196 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1af3a272-dd2c-446d-9ac3-7a2c380c34c8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqzr8\" (UID: \"1af3a272-dd2c-446d-9ac3-7a2c380c34c8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.188225 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" event={"ID":"7a738e9a-0692-4476-b9ba-930e3bdc34d2","Type":"ContainerStarted","Data":"9798d65b85ae4e13b6c002346401f6bb1ef68d24b1a06667dcac8951b39cc2a0"} Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.193914 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvpnt\" (UniqueName: \"kubernetes.io/projected/dcd69695-49d3-46a8-9981-b592c44e827e-kube-api-access-fvpnt\") pod \"console-f9d7485db-r2dqq\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.195963 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-x9ffr\" (UID: \"b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.209122 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-rtb8n" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.217421 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.220598 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q72nm\" (UniqueName: \"kubernetes.io/projected/fb09df70-be06-48b6-a41d-16fb110b7c55-kube-api-access-q72nm\") pod \"authentication-operator-69f744f599-9zgsz\" (UID: \"fb09df70-be06-48b6-a41d-16fb110b7c55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.223212 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbcls\" (UniqueName: \"kubernetes.io/projected/ffd4b935-0435-4a73-a7cd-596856c63f84-kube-api-access-hbcls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pgswj\" (UID: \"ffd4b935-0435-4a73-a7cd-596856c63f84\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.226592 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv"] Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.230877 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.246162 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.259846 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.268195 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz4jv\" (UniqueName: \"kubernetes.io/projected/45eb000e-b333-47b8-9cb5-d383ca0628dd-kube-api-access-dz4jv\") pod \"service-ca-9c57cc56f-67w4c\" (UID: \"45eb000e-b333-47b8-9cb5-d383ca0628dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-67w4c" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.273863 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.276375 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.276523 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.277293 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:50.77726593 +0000 UTC m=+143.272986852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.280340 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx6n9\" (UniqueName: \"kubernetes.io/projected/34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac-kube-api-access-jx6n9\") pod \"package-server-manager-789f6589d5-qfljx\" (UID: \"34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.287191 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.292910 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.306328 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdqxz\" (UniqueName: \"kubernetes.io/projected/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-kube-api-access-bdqxz\") pod \"collect-profiles-29523720-vljqj\" (UID: \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.324288 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.324684 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4e774d72-bc18-4fab-b988-c36f581d7560-bound-sa-token\") pod \"ingress-operator-5b745b69d9-464cg\" (UID: \"4e774d72-bc18-4fab-b988-c36f581d7560\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.339779 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4sq7\" (UniqueName: \"kubernetes.io/projected/99012b96-1a3e-48ae-ac97-55ab91c6eb6f-kube-api-access-k4sq7\") pod \"dns-operator-744455d44c-mxwhp\" (UID: \"99012b96-1a3e-48ae-ac97-55ab91c6eb6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-mxwhp" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.366953 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4rkm\" (UniqueName: \"kubernetes.io/projected/db115d76-8ccf-4c6b-8b1f-f507ad381c95-kube-api-access-f4rkm\") pod \"csi-hostpathplugin-q8t8f\" (UID: \"db115d76-8ccf-4c6b-8b1f-f507ad381c95\") " pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.369135 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.377665 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.378090 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:50.87805973 +0000 UTC m=+143.373780652 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.379329 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trndz\" (UniqueName: \"kubernetes.io/projected/4e774d72-bc18-4fab-b988-c36f581d7560-kube-api-access-trndz\") pod \"ingress-operator-5b745b69d9-464cg\" (UID: \"4e774d72-bc18-4fab-b988-c36f581d7560\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.399344 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.404682 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgvzk\" (UniqueName: \"kubernetes.io/projected/d27c3dde-4f78-49ec-8cc2-39c588d91f56-kube-api-access-mgvzk\") pod \"packageserver-d55dfcdfc-k8g5m\" (UID: \"d27c3dde-4f78-49ec-8cc2-39c588d91f56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.409114 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-67w4c" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.415271 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.416911 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5455\" (UniqueName: \"kubernetes.io/projected/5873f31d-7486-489d-866f-9442195a86bf-kube-api-access-l5455\") pod \"dns-default-8lgk6\" (UID: \"5873f31d-7486-489d-866f-9442195a86bf\") " pod="openshift-dns/dns-default-8lgk6" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.423134 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.430587 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mxwhp" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.446173 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgllc\" (UniqueName: \"kubernetes.io/projected/d004f5dd-a97b-4707-be47-cd5a9bb69c8a-kube-api-access-mgllc\") pod \"machine-config-server-fjgwd\" (UID: \"d004f5dd-a97b-4707-be47-cd5a9bb69c8a\") " pod="openshift-machine-config-operator/machine-config-server-fjgwd" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.463420 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k7pd\" (UniqueName: \"kubernetes.io/projected/bb6d5402-0976-4291-b4ee-5c481fd8df72-kube-api-access-8k7pd\") pod \"ingress-canary-fbnbw\" (UID: \"bb6d5402-0976-4291-b4ee-5c481fd8df72\") " pod="openshift-ingress-canary/ingress-canary-fbnbw" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.481366 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.481948 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:50.981936918 +0000 UTC m=+143.477657840 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.482713 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.489768 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-8lgk6" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.533726 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48"] Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.541064 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zzrbt"] Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.566696 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fqdjl"] Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.583512 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.586737 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.086554785 +0000 UTC m=+143.582275717 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.586919 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.587303 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.087296134 +0000 UTC m=+143.583017056 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.589772 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6"] Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.634824 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7"] Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.634864 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7"] Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.638297 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc"] Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.683567 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.688501 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.688772 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.18872059 +0000 UTC m=+143.684441512 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.690173 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.691749 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.191733057 +0000 UTC m=+143.687453979 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.739326 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fbnbw" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.747648 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fjgwd" Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.791327 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.791714 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.291696225 +0000 UTC m=+143.787417157 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.892347 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.892631 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.392620158 +0000 UTC m=+143.888341080 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.993729 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.994017 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.493996931 +0000 UTC m=+143.989717853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:50 crc kubenswrapper[4739]: I0218 14:01:50.995412 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:50 crc kubenswrapper[4739]: E0218 14:01:50.995845 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.495831548 +0000 UTC m=+143.991552460 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.096560 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:51 crc kubenswrapper[4739]: E0218 14:01:51.096915 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.596901414 +0000 UTC m=+144.092622336 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.200226 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:51 crc kubenswrapper[4739]: E0218 14:01:51.200807 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.700787333 +0000 UTC m=+144.196508305 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.223894 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk" event={"ID":"3440ceb6-cf9c-4732-bafb-8a58d419276a","Type":"ContainerStarted","Data":"709aa7985ff2276c020f17fb6d2e08776b5e1af00e2c9079bc5748e13ce979f3"} Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.230709 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" event={"ID":"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713","Type":"ContainerStarted","Data":"2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418"} Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.237400 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.236088 4739 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lbspb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.237503 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" podUID="d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.242365 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx" event={"ID":"537a1340-9cce-4d5b-9cff-35d934fc4d71","Type":"ContainerStarted","Data":"c580c90df571a7fb6bc9806588bd15723f39f8d7a44c9d061e736010db9ea57e"} Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.250239 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zzrbt" event={"ID":"bcf6796a-5a97-465e-927e-eaf313fcec05","Type":"ContainerStarted","Data":"76ca13646a6f555b7a58af73d7be351c098394951382d84b9d889dde606395a3"} Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.251115 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-64j2j"] Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.281944 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fjgwd" event={"ID":"d004f5dd-a97b-4707-be47-cd5a9bb69c8a","Type":"ContainerStarted","Data":"97859bcc46030d2f87ef58bc80363d7933bc87323cf44110994346065252628b"} Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.289081 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9zgsz"] Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 
14:01:51.300371 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" event={"ID":"6a73ee03-bb76-478c-bcd1-2d08f0e6f538","Type":"ContainerStarted","Data":"9544046d49726b08bf59463c644ffe22c27473e133ce5760004a0699f322d56b"} Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.300843 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.302315 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:51 crc kubenswrapper[4739]: E0218 14:01:51.310409 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.802465165 +0000 UTC m=+144.298186087 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.310635 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:51 crc kubenswrapper[4739]: E0218 14:01:51.310931 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.810919972 +0000 UTC m=+144.306640894 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.324094 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" event={"ID":"9d038913-f9eb-40ed-89a8-4687734573aa","Type":"ContainerStarted","Data":"32ef41f0b4a7925cabbc8250513df4d06f7d5d181f6b27d2803e7483e7b4cf75"} Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.336471 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" event={"ID":"07036c39-40f5-4969-afd0-1003c1eae037","Type":"ContainerStarted","Data":"2e24119667eedf40b82477d0bd3173e3790841c18a675752032ca58080019729"} Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.336509 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" event={"ID":"07036c39-40f5-4969-afd0-1003c1eae037","Type":"ContainerStarted","Data":"0d679bf97ccb59f87700e07f0a788d9cfe9d2202bf473ab59b61116ca4b4adee"} Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.337218 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.343859 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lvb5" event={"ID":"52fa7608-a369-4813-8a4d-3e2f8b84c885","Type":"ContainerStarted","Data":"5ede99a099f422ae08e5df96fa4980d3f1ba68a9678cd69c1a2957615f10e256"} Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.350691 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv" event={"ID":"84562f70-3466-4537-9761-33e3abcaacb9","Type":"ContainerStarted","Data":"847dde122b625d1f909bbda96fe9090a22f609392abfc78c26ae880a24885532"} Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.350739 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv" event={"ID":"84562f70-3466-4537-9761-33e3abcaacb9","Type":"ContainerStarted","Data":"ddb6ee868e6b584bf7a8a889579a1f94d4dcfeb1dbd2bba5c51811607eab1333"} Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.350806 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.350873 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.352301 4739 generic.go:334] 
"Generic (PLEG): container finished" podID="7a738e9a-0692-4476-b9ba-930e3bdc34d2" containerID="603b086f5a20b396ca79d4fcf433b144e7214077cbc50414486f96674e7ab8c4" exitCode=0 Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.352356 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" event={"ID":"7a738e9a-0692-4476-b9ba-930e3bdc34d2","Type":"ContainerDied","Data":"603b086f5a20b396ca79d4fcf433b144e7214077cbc50414486f96674e7ab8c4"} Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.365960 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" event={"ID":"9b2cc162-65ce-48dc-a49f-522d020772bd","Type":"ContainerStarted","Data":"489dd3026a2ab974817b6a4a3b7a46f35344594129a95c2983c87659ebaee3df"} Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.367027 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" event={"ID":"9c1d88a8-7aa9-413f-81cc-5a4852b2691b","Type":"ContainerStarted","Data":"15d73e6bd39405a7a3ff8fe8df861177449cae8d826eea2924592223c7683055"} Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.369183 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.373098 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" event={"ID":"d41d7405-9b25-414a-a247-1d945df68f89","Type":"ContainerStarted","Data":"c2b9ad86542f62b7253ed535eed8d5364f60faa03da19a9f47f405687aeda261"} Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.375876 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48" event={"ID":"84627667-4128-47e5-a611-c650633e8362","Type":"ContainerStarted","Data":"9790ec144703857b9df6a328709790c3dbab5582dc3a53c477c5e2e2ad431e6c"} Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.375925 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48" event={"ID":"84627667-4128-47e5-a611-c650633e8362","Type":"ContainerStarted","Data":"7237d1072de04816ebd6193a3f49122a2c11b6abbee558bca1fe65ebd887a9f3"} Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.390457 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" event={"ID":"db4aad67-0ef8-474a-9e92-143738aed5b6","Type":"ContainerStarted","Data":"c4ae30c0d54d4ef219b473e3da57997fe4557e4d0c833df91259e05007b1b050"} Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.390505 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.390852 4739 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-f4xd7 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.390916 4739 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" podUID="9c1d88a8-7aa9-413f-81cc-5a4852b2691b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.391287 4739 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-kmtx7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.391324 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podUID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.403261 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc" event={"ID":"b4948709-692e-4ce2-b84a-55a87412856d","Type":"ContainerStarted","Data":"acbe2563d63342e05403b9dd1af03a77b36b37ebdca0810dd6223ac25e4c6b37"} Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.408888 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5cdhr" event={"ID":"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e","Type":"ContainerStarted","Data":"3a9511a2775b08e37ccce91ae91ba1e1e8cf796f076f0c19d9ce73a8baf793c5"} Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.411685 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:51 crc kubenswrapper[4739]: E0218 14:01:51.412741 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:51.912727007 +0000 UTC m=+144.408447919 (durationBeforeRetry 500ms). 
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.418856 4739 generic.go:334] "Generic (PLEG): container finished" podID="86f15b94-810d-4448-a663-fd8862f0e601" containerID="0293fd784194161be55c4a69f6d5bfe73ee070c34ef9ca3ab5c650f69fc6e283" exitCode=0
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.419102 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-n78q8" event={"ID":"86f15b94-810d-4448-a663-fd8862f0e601","Type":"ContainerDied","Data":"0293fd784194161be55c4a69f6d5bfe73ee070c34ef9ca3ab5c650f69fc6e283"}
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.421866 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p"
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.423061 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-c4w7p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.423106 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" podUID="c43a59b1-306c-4a0e-9f9f-fad2e9082d55" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused"
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.482041 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-rtb8n"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.486622 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-r2dqq"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.516264 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
Feb 18 14:01:51 crc kubenswrapper[4739]: E0218 14:01:51.522675 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:52.022661651 +0000 UTC m=+144.518382653 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.551977 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8"] Feb 18 14:01:51 crc kubenswrapper[4739]: W0218 14:01:51.602381 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcd69695_49d3_46a8_9981_b592c44e827e.slice/crio-521d0f76ee7d4a163d13b57cff922dcd0df4129aae7138664aa07df19279036a WatchSource:0}: Error finding container 521d0f76ee7d4a163d13b57cff922dcd0df4129aae7138664aa07df19279036a: Status 404 returned error can't find the container with id 521d0f76ee7d4a163d13b57cff922dcd0df4129aae7138664aa07df19279036a Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.618627 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:51 crc kubenswrapper[4739]: E0218 14:01:51.621294 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:52.121189842 +0000 UTC m=+144.616910764 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.639384 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.649189 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz"
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.650204 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-67w4c"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.661537 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-464cg"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.669339 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.671671 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.678669 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-b2m46"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.694326 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj"]
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.712201 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-5cdhr"
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.719604 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 14:01:51 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld
Feb 18 14:01:51 crc kubenswrapper[4739]: [+]process-running ok
Feb 18 14:01:51 crc kubenswrapper[4739]: healthz check failed
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.719658 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.720027 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
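
The router's startup-probe output above shows the aggregated healthz format: each registered check contributes a [+]name ok or [-]name failed: reason withheld line, and a single failing check makes the endpoint return HTTP 500 with a closing "healthz check failed" line, which the kubelet then reports as "HTTP probe failed with statuscode: 500". A minimal handler in that style (the check names are taken from the output above; the handler itself is an illustrative sketch, not the router's actual code):

package main

import (
	"fmt"
	"net/http"
	"strings"
)

type namedCheck struct {
	name  string
	check func() error
}

// healthzHandler aggregates named checks in the style of the router's
// probe output: one [+]/[-] line per check, reasons withheld, and any
// failure turns the whole endpoint into an HTTP 500.
func healthzHandler(checks []namedCheck) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var body strings.Builder
		failed := false
		for _, c := range checks {
			if err := c.check(); err != nil {
				failed = true
				fmt.Fprintf(&body, "[-]%s failed: reason withheld\n", c.name)
			} else {
				fmt.Fprintf(&body, "[+]%s ok\n", c.name)
			}
		}
		if failed {
			body.WriteString("healthz check failed")
			http.Error(w, body.String(), http.StatusInternalServerError)
			return
		}
		fmt.Fprint(w, body.String())
	}
}

func main() {
	checks := []namedCheck{
		{"backend-http", func() error { return fmt.Errorf("not synced") }},
		{"has-synced", func() error { return fmt.Errorf("not synced") }},
		{"process-running", func() error { return nil }},
	}
	http.Handle("/healthz", healthzHandler(checks))
	_ = http.ListenAndServe(":8080", nil) // probe GETs return 500 until the failing checks clear
}
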
Feb 18 14:01:51 crc kubenswrapper[4739]: E0218 14:01:51.720304 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:52.220292878 +0000 UTC m=+144.716013800 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:51 crc kubenswrapper[4739]: W0218 14:01:51.744651 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45eb000e_b333_47b8_9cb5_d383ca0628dd.slice/crio-3d4418f13c27b323d612b53c204b3546e3d725840a94d14a749b5016f9be2e3a WatchSource:0}: Error finding container 3d4418f13c27b323d612b53c204b3546e3d725840a94d14a749b5016f9be2e3a: Status 404 returned error can't find the container with id 3d4418f13c27b323d612b53c204b3546e3d725840a94d14a749b5016f9be2e3a
Feb 18 14:01:51 crc kubenswrapper[4739]: W0218 14:01:51.755620 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d076be7_905d_48ba_a63c_1c87999890ba.slice/crio-09468a62e8e5dca94fb7ce0971d5ace725777d3b55d8d158d439a61da1d529bd WatchSource:0}: Error finding container 09468a62e8e5dca94fb7ce0971d5ace725777d3b55d8d158d439a61da1d529bd: Status 404 returned error can't find the container with id 09468a62e8e5dca94fb7ce0971d5ace725777d3b55d8d158d439a61da1d529bd
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.824858 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:01:51 crc kubenswrapper[4739]: E0218 14:01:51.825484 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:52.32546872 +0000 UTC m=+144.821189642 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.847500 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m"] Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.850500 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg"] Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.850541 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-q8t8f"] Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.861973 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8lgk6"] Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.898661 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fbnbw"] Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.900566 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj"] Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.902834 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mxwhp"] Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.902879 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx"] Feb 18 14:01:51 crc kubenswrapper[4739]: W0218 14:01:51.907840 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd27c3dde_4f78_49ec_8cc2_39c588d91f56.slice/crio-9bdd0417bd4499953f72c03f906c291fb87de4ec1d2e25e679b2a7a1fe3920c5 WatchSource:0}: Error finding container 9bdd0417bd4499953f72c03f906c291fb87de4ec1d2e25e679b2a7a1fe3920c5: Status 404 returned error can't find the container with id 9bdd0417bd4499953f72c03f906c291fb87de4ec1d2e25e679b2a7a1fe3920c5 Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.926453 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:51 crc kubenswrapper[4739]: E0218 14:01:51.926789 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:52.426777692 +0000 UTC m=+144.922498614 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:51 crc kubenswrapper[4739]: W0218 14:01:51.941148 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5873f31d_7486_489d_866f_9442195a86bf.slice/crio-b7cb747ed0dbd51bf35e523b522fd40d2795e9cd42d0a8a569fa478847385e17 WatchSource:0}: Error finding container b7cb747ed0dbd51bf35e523b522fd40d2795e9cd42d0a8a569fa478847385e17: Status 404 returned error can't find the container with id b7cb747ed0dbd51bf35e523b522fd40d2795e9cd42d0a8a569fa478847385e17
Feb 18 14:01:51 crc kubenswrapper[4739]: I0218 14:01:51.953825 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wbqrx" podStartSLOduration=118.953807806 podStartE2EDuration="1m58.953807806s" podCreationTimestamp="2026-02-18 13:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:51.953783956 +0000 UTC m=+144.449504878" watchObservedRunningTime="2026-02-18 14:01:51.953807806 +0000 UTC m=+144.449528728"
Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.004561 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-5cdhr" podStartSLOduration=118.00454415 podStartE2EDuration="1m58.00454415s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.003350689 +0000 UTC m=+144.499071611" watchObservedRunningTime="2026-02-18 14:01:52.00454415 +0000 UTC m=+144.500265072"
Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.028307 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.028516 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:52.528490435 +0000 UTC m=+145.024211357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
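
The "Observed pod startup duration" entries above are arithmetic over the timestamps they print: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is the same interval expressed in seconds (the pulling timestamps are zero here, so no image-pull time is subtracted). Recomputing the openshift-apiserver-operator entry's numbers:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the openshift-apiserver-operator entry above.
	created := time.Date(2026, time.February, 18, 13, 59, 53, 0, time.UTC)
	observed := time.Date(2026, time.February, 18, 14, 1, 51, 953807806, time.UTC)

	e2e := observed.Sub(created)
	fmt.Println(e2e)           // 1m58.953807806s == podStartE2EDuration
	fmt.Println(e2e.Seconds()) // 118.953807806  == podStartSLOduration
}
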
Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.028548 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.029416 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:52.529403258 +0000 UTC m=+145.025124180 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.039398 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc" podStartSLOduration=118.039377744 podStartE2EDuration="1m58.039377744s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.037082306 +0000 UTC m=+144.532803238" watchObservedRunningTime="2026-02-18 14:01:52.039377744 +0000 UTC m=+144.535098656"
Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.085985 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" podStartSLOduration=118.085966041 podStartE2EDuration="1m58.085966041s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.078379386 +0000 UTC m=+144.574100308" watchObservedRunningTime="2026-02-18 14:01:52.085966041 +0000 UTC m=+144.581686963"
Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.129281 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.130018 4739 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:52.629996402 +0000 UTC m=+145.125717324 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.204670 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" podStartSLOduration=118.20465647 podStartE2EDuration="1m58.20465647s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.161898462 +0000 UTC m=+144.657619384" watchObservedRunningTime="2026-02-18 14:01:52.20465647 +0000 UTC m=+144.700377392" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.231450 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" podStartSLOduration=118.231424868 podStartE2EDuration="1m58.231424868s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.205708247 +0000 UTC m=+144.701429169" watchObservedRunningTime="2026-02-18 14:01:52.231424868 +0000 UTC m=+144.727145790" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.231583 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.231949 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podStartSLOduration=118.231945661 podStartE2EDuration="1m58.231945661s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.230835573 +0000 UTC m=+144.726556495" watchObservedRunningTime="2026-02-18 14:01:52.231945661 +0000 UTC m=+144.727666583" Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.231984 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:52.731968882 +0000 UTC m=+145.227689804 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.315521 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4lvb5" podStartSLOduration=118.315501808 podStartE2EDuration="1m58.315501808s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.315039646 +0000 UTC m=+144.810760578" watchObservedRunningTime="2026-02-18 14:01:52.315501808 +0000 UTC m=+144.811222730" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.316549 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6ds48" podStartSLOduration=118.316539824 podStartE2EDuration="1m58.316539824s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.279421941 +0000 UTC m=+144.775142873" watchObservedRunningTime="2026-02-18 14:01:52.316539824 +0000 UTC m=+144.812260766" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.332189 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.336397 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:52.832425212 +0000 UTC m=+145.328146144 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.336649 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.337070 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:52.837054091 +0000 UTC m=+145.332775013 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.397037 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podStartSLOduration=118.397021392 podStartE2EDuration="1m58.397021392s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.352351824 +0000 UTC m=+144.848072766" watchObservedRunningTime="2026-02-18 14:01:52.397021392 +0000 UTC m=+144.892742314" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.431924 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" event={"ID":"b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da","Type":"ContainerStarted","Data":"a9dc6a5a76b00c50706e5e8be6140d9f1bbc9b5ab63de7523b7e90764fa60739"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.433270 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-r2dqq" event={"ID":"dcd69695-49d3-46a8-9981-b592c44e827e","Type":"ContainerStarted","Data":"e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.433294 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-r2dqq" event={"ID":"dcd69695-49d3-46a8-9981-b592c44e827e","Type":"ContainerStarted","Data":"521d0f76ee7d4a163d13b57cff922dcd0df4129aae7138664aa07df19279036a"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.436142 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" 
event={"ID":"db115d76-8ccf-4c6b-8b1f-f507ad381c95","Type":"ContainerStarted","Data":"780fce724889e3cb5d1d13acc16e16877db685558fd22ae9b142dc12aea4188c"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.436762 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podStartSLOduration=118.436751802 podStartE2EDuration="1m58.436751802s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.435210013 +0000 UTC m=+144.930930935" watchObservedRunningTime="2026-02-18 14:01:52.436751802 +0000 UTC m=+144.932472724" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.438078 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.438395 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:52.938384664 +0000 UTC m=+145.434105586 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.443583 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-rtb8n" event={"ID":"c8e8ae74-3ef7-42df-99f2-1f67c11edf6d","Type":"ContainerStarted","Data":"76a79069d52c8f8cf823038205c3af57a6bc33e4cafbfb519dad10e4bb7c590b"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.443621 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-rtb8n" event={"ID":"c8e8ae74-3ef7-42df-99f2-1f67c11edf6d","Type":"ContainerStarted","Data":"bc0a14a3686e361498dc238b3050070dd4c8dcf0b3d9dd6f2ff6ffcab89ad1ac"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.444461 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-rtb8n" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.445772 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" event={"ID":"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0","Type":"ContainerStarted","Data":"24416838c3485f5f59f847cbabc4eb0faac583f47943bdc172447667af33c1a4"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.448016 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" event={"ID":"7a738e9a-0692-4476-b9ba-930e3bdc34d2","Type":"ContainerStarted","Data":"72bd5bb0249ac0bfae0cd92c5ca1379c10de09155ffd4a1cd5651649a5f7f819"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 
14:01:52.451016 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fbnbw" event={"ID":"bb6d5402-0976-4291-b4ee-5c481fd8df72","Type":"ContainerStarted","Data":"6da9def75b21ba4d5b8be0d36f038b06d324a82056ff5ab96ebeafb36d90715b"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.455378 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" event={"ID":"1af3a272-dd2c-446d-9ac3-7a2c380c34c8","Type":"ContainerStarted","Data":"4b6b8cd6c72ad920711875a76ba3d43d536145610437b3064ac664b8e7e6e7a9"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.455388 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-rtb8n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.455408 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" event={"ID":"1af3a272-dd2c-446d-9ac3-7a2c380c34c8","Type":"ContainerStarted","Data":"f0a115ccfb7a2db55613a41b98d76463498f836890263e847766929f500d65b4"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.455431 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rtb8n" podUID="c8e8ae74-3ef7-42df-99f2-1f67c11edf6d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.459145 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" event={"ID":"db4aad67-0ef8-474a-9e92-143738aed5b6","Type":"ContainerStarted","Data":"71cd9ce0ab26ac5d77f5f24bda6ba500e6e908373465984fe7265b695d172478"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.459861 4739 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-kmtx7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.459903 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podUID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.460548 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8lgk6" event={"ID":"5873f31d-7486-489d-866f-9442195a86bf","Type":"ContainerStarted","Data":"b7cb747ed0dbd51bf35e523b522fd40d2795e9cd42d0a8a569fa478847385e17"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.464140 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv" event={"ID":"84562f70-3466-4537-9761-33e3abcaacb9","Type":"ContainerStarted","Data":"6f339289c07bbc22f01501f11fa2db49998435cdbbd47b4616fcca0ec4213610"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.467020 4739 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj" event={"ID":"ffd4b935-0435-4a73-a7cd-596856c63f84","Type":"ContainerStarted","Data":"b918f952fcd24aa6e35d78a9dae641db97a6f91032b8ac283e78c8b1d09bb523"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.467049 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj" event={"ID":"ffd4b935-0435-4a73-a7cd-596856c63f84","Type":"ContainerStarted","Data":"43c19ceb1da81f8e21da41ae36950d98049c8c27b28b1c652b8c5936c46fcc24"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.468239 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" event={"ID":"4e774d72-bc18-4fab-b988-c36f581d7560","Type":"ContainerStarted","Data":"b8e43f9cb25a4bf00bf6a5f4f07101efbaa32a2c7f2003d1beda2735f37d5e23"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.470683 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" event={"ID":"663bc659-8603-490f-9b6e-7ffe14960463","Type":"ContainerStarted","Data":"2091e0b6ec823c2be46cc955f8e1860f25dcbaf76d40f0a02489ec9b087df706"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.470711 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" event={"ID":"663bc659-8603-490f-9b6e-7ffe14960463","Type":"ContainerStarted","Data":"39ed9908fc06adc6beaf03f5a0f7a7f9cb74f347fecb397c807b3e8019f3cdd9"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.471020 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.473555 4739 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-64j2j container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.473588 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" podUID="663bc659-8603-490f-9b6e-7ffe14960463" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.474183 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-sqm9s" podStartSLOduration=118.474158183 podStartE2EDuration="1m58.474158183s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.469886553 +0000 UTC m=+144.965607475" watchObservedRunningTime="2026-02-18 14:01:52.474158183 +0000 UTC m=+144.969879115" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.474593 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-67w4c" event={"ID":"45eb000e-b333-47b8-9cb5-d383ca0628dd","Type":"ContainerStarted","Data":"dac7852d0d18f8e9f0f185b76bd1542a5beda377626a6f61607a0797c0fdf1d4"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.474633 4739 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-67w4c" event={"ID":"45eb000e-b333-47b8-9cb5-d383ca0628dd","Type":"ContainerStarted","Data":"3d4418f13c27b323d612b53c204b3546e3d725840a94d14a749b5016f9be2e3a"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.480460 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" event={"ID":"8d076be7-905d-48ba-a63c-1c87999890ba","Type":"ContainerStarted","Data":"09468a62e8e5dca94fb7ce0971d5ace725777d3b55d8d158d439a61da1d529bd"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.482940 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc" event={"ID":"ed2152ce-68ce-43a9-87fc-b55b6f46e093","Type":"ContainerStarted","Data":"ebd678427637d2d33b7e2608fe1da8a385d7e4a9549ca54971083d9ff0db99f6"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.483924 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" event={"ID":"21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be","Type":"ContainerStarted","Data":"e25f2428cec7a470befa13bd19f47f8ffcb8c05d35ac6f33704688d06921be9a"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.490487 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" event={"ID":"d27c3dde-4f78-49ec-8cc2-39c588d91f56","Type":"ContainerStarted","Data":"9bdd0417bd4499953f72c03f906c291fb87de4ec1d2e25e679b2a7a1fe3920c5"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.491520 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" event={"ID":"fb09df70-be06-48b6-a41d-16fb110b7c55","Type":"ContainerStarted","Data":"f4b0d8e8e140fb6de11974026f9767ddfdf44ffbc0d5f61b072eb7c7dcd22916"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.491545 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" event={"ID":"fb09df70-be06-48b6-a41d-16fb110b7c55","Type":"ContainerStarted","Data":"2ab479b392cde15a02159889aef023d8858fc9ecbff4659d1f2680779fd37752"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.493230 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" event={"ID":"9b2cc162-65ce-48dc-a49f-522d020772bd","Type":"ContainerStarted","Data":"4787e51956ce23d3ac2da1265db039a01e4c24c45a372c3dfbceb153d7f0cb94"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.493252 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" event={"ID":"9b2cc162-65ce-48dc-a49f-522d020772bd","Type":"ContainerStarted","Data":"bc0ca75a411d408a5eef0a9be021e6ec6ddfe044ce205158e181cec58a1cb55a"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.513532 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mxwhp" event={"ID":"99012b96-1a3e-48ae-ac97-55ab91c6eb6f","Type":"ContainerStarted","Data":"71b6e14b271f5585f0284dd41e5c3e8015f2d67a74c41b5d18b9e94e373567cf"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.516537 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-n78q8" 
event={"ID":"86f15b94-810d-4448-a663-fd8862f0e601","Type":"ContainerStarted","Data":"521a422bc1cfb9e5f3bf56987c01fdfaac33848c737e116e528a48af944e975a"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.518318 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" event={"ID":"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2","Type":"ContainerStarted","Data":"43a442238ea052e2c884c898a38ebff90ea3de18bedf387b1ec96c36fc6e942a"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.518337 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" event={"ID":"b8d6ecdf-345d-463d-b7d4-d4cc930e38e2","Type":"ContainerStarted","Data":"21dd952a4adff37ba4d7fe8578ee2fe8fc346eac47f87176e72cafe003f400d5"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.518360 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-25vxv" podStartSLOduration=118.51835053799999 podStartE2EDuration="1m58.518350538s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.516681416 +0000 UTC m=+145.012402338" watchObservedRunningTime="2026-02-18 14:01:52.518350538 +0000 UTC m=+145.014071450" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.533921 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" event={"ID":"9c1d88a8-7aa9-413f-81cc-5a4852b2691b","Type":"ContainerStarted","Data":"b9cc6ff5892682dda1a1d2876a6134b5a1006ba7276bacb929c849056d68e891"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.535296 4739 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-f4xd7 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.535347 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" podUID="9c1d88a8-7aa9-413f-81cc-5a4852b2691b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.537930 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-m59cc" event={"ID":"b4948709-692e-4ce2-b84a-55a87412856d","Type":"ContainerStarted","Data":"a98b964a7a70ab8a57d716a80b47087dc112e5b88b69c67664d9c32ea469d7fe"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.549304 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zzrbt" event={"ID":"bcf6796a-5a97-465e-927e-eaf313fcec05","Type":"ContainerStarted","Data":"2c4a7eb068be71106726e9e23aea90de36bfb4b6a0e0bded3667395897654c3d"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.549339 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zzrbt" 
event={"ID":"bcf6796a-5a97-465e-927e-eaf313fcec05","Type":"ContainerStarted","Data":"28c99292842f2224376e77794a5bb086114fc269ef0fe2189cb969191993b23d"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.550847 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fjgwd" event={"ID":"d004f5dd-a97b-4707-be47-cd5a9bb69c8a","Type":"ContainerStarted","Data":"f8ecfd184fdb8ef0b9ae52832ef376e99e1faa089e999d350bcddbc1f4b5ee38"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.551287 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.551543 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:53.051531761 +0000 UTC m=+145.547252683 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.562661 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.580248 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.562341 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" event={"ID":"34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac","Type":"ContainerStarted","Data":"e7ea5d3f2ab9c1840a87fa286e1d46b0d0c23b0e0bfb037d47385cbe78e55901"} Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.582489 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.582602 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.591229 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" podStartSLOduration=118.5912119 podStartE2EDuration="1m58.5912119s" 
podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.590111252 +0000 UTC m=+145.085832184" watchObservedRunningTime="2026-02-18 14:01:52.5912119 +0000 UTC m=+145.086932822" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.591330 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zwjnk" podStartSLOduration=118.591324073 podStartE2EDuration="1m58.591324073s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.558609893 +0000 UTC m=+145.054330815" watchObservedRunningTime="2026-02-18 14:01:52.591324073 +0000 UTC m=+145.087044995" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.634788 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tz66n" podStartSLOduration=119.634773969 podStartE2EDuration="1m59.634773969s" podCreationTimestamp="2026-02-18 13:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.634348358 +0000 UTC m=+145.130069290" watchObservedRunningTime="2026-02-18 14:01:52.634773969 +0000 UTC m=+145.130494881" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.666912 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.667101 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:53.167073729 +0000 UTC m=+145.662794651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.667601 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.670779 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 14:01:53.170768164 +0000 UTC m=+145.666489086 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.676525 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-67w4c" podStartSLOduration=118.676510341 podStartE2EDuration="1m58.676510341s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.67608348 +0000 UTC m=+145.171804422" watchObservedRunningTime="2026-02-18 14:01:52.676510341 +0000 UTC m=+145.172231263" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.711643 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:01:52 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Feb 18 14:01:52 crc kubenswrapper[4739]: [+]process-running ok Feb 18 14:01:52 crc kubenswrapper[4739]: healthz check failed Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.711719 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.733140 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-zzrbt" podStartSLOduration=118.733126396 podStartE2EDuration="1m58.733126396s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.731409831 +0000 UTC m=+145.227130763" watchObservedRunningTime="2026-02-18 14:01:52.733126396 +0000 UTC m=+145.228847318" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.773049 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.773471 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:53.273433061 +0000 UTC m=+145.769153993 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.815378 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqzr8" podStartSLOduration=118.815357878 podStartE2EDuration="1m58.815357878s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.809711783 +0000 UTC m=+145.305432715" watchObservedRunningTime="2026-02-18 14:01:52.815357878 +0000 UTC m=+145.311078800" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.867005 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" podStartSLOduration=119.866990424 podStartE2EDuration="1m59.866990424s" podCreationTimestamp="2026-02-18 13:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.865594728 +0000 UTC m=+145.361315660" watchObservedRunningTime="2026-02-18 14:01:52.866990424 +0000 UTC m=+145.362711346" Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.881676 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.881961 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:53.381949929 +0000 UTC m=+145.877670851 (durationBeforeRetry 500ms). 
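Each failed CSI operation above is immediately rescheduled: nestedpendingoperations stamps the volume with "No retries permitted until now + durationBeforeRetry", and the reconciler's next pass skips the volume until that deadline passes. A small sketch of that gating (simplified to the fixed 500ms this excerpt shows; kubelet itself grows the backoff on repeated failures):

import time

class OperationGate:
    """Per-volume retry gate in the style of nestedpendingoperations."""
    def __init__(self, delay_s: float = 0.5):
        self.delay_s = delay_s
        self.not_before: dict[str, float] = {}  # volumeName -> earliest retry time

    def try_run(self, volume: str, op) -> bool:
        now = time.monotonic()
        if now < self.not_before.get(volume, 0.0):
            return False  # "No retries permitted until ..."
        try:
            op()
            self.not_before.pop(volume, None)
            return True
        except Exception:
            # operation failed; gate further attempts for delay_s
            self.not_before[volume] = now + self.delay_s
            return False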
Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.894864 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" podStartSLOduration=119.89484787 podStartE2EDuration="1m59.89484787s" podCreationTimestamp="2026-02-18 13:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.894555132 +0000 UTC m=+145.390276074" watchObservedRunningTime="2026-02-18 14:01:52.89484787 +0000 UTC m=+145.390568792"
Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.925881 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-rtb8n" podStartSLOduration=118.925861666 podStartE2EDuration="1m58.925861666s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.923764713 +0000 UTC m=+145.419485655" watchObservedRunningTime="2026-02-18 14:01:52.925861666 +0000 UTC m=+145.421582588"
Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.957105 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-fjgwd" podStartSLOduration=5.957085469 podStartE2EDuration="5.957085469s" podCreationTimestamp="2026-02-18 14:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.956244997 +0000 UTC m=+145.451965919" watchObservedRunningTime="2026-02-18 14:01:52.957085469 +0000 UTC m=+145.452806391"
Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.970664 4739 csr.go:261] certificate signing request csr-l66mr is approved, waiting to be issued
Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.982102 4739 csr.go:257] certificate signing request csr-l66mr is issued
Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.983231 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.983401 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:53.483383034 +0000 UTC m=+145.979103956 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:52 crc kubenswrapper[4739]: I0218 14:01:52.983517 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
Feb 18 14:01:52 crc kubenswrapper[4739]: E0218 14:01:52.984786 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:53.48477247 +0000 UTC m=+145.980493392 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:52.998148 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-r2dqq" podStartSLOduration=118.998132523 podStartE2EDuration="1m58.998132523s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:52.997411444 +0000 UTC m=+145.493132376" watchObservedRunningTime="2026-02-18 14:01:52.998132523 +0000 UTC m=+145.493853445"
Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.034920 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9knp6" podStartSLOduration=119.034906168 podStartE2EDuration="1m59.034906168s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:53.032758072 +0000 UTC m=+145.528478994" watchObservedRunningTime="2026-02-18 14:01:53.034906168 +0000 UTC m=+145.530627090"
Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.084784 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:01:53 crc kubenswrapper[4739]: E0218 14:01:53.084981 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:53.584943543 +0000 UTC m=+146.080664465 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.085248 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
Feb 18 14:01:53 crc kubenswrapper[4739]: E0218 14:01:53.085622 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:53.58560813 +0000 UTC m=+146.081329042 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.119101 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" podStartSLOduration=119.11908445 podStartE2EDuration="1m59.11908445s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:53.1167482 +0000 UTC m=+145.612469122" watchObservedRunningTime="2026-02-18 14:01:53.11908445 +0000 UTC m=+145.614805372"
Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.186048 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:01:53 crc kubenswrapper[4739]: E0218 14:01:53.186546 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:53.686532063 +0000 UTC m=+146.182252975 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
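Reading the fields off the apiserver-7bbb656c7d-44mk7 entry above, podStartSLOduration appears to be watchObservedRunningTime minus podCreationTimestamp (the pull timestamps are the zero value here because no image pull was recorded). A quick check of the arithmetic:

from datetime import datetime, timezone

created  = datetime(2026, 2, 18, 13, 59, 54, tzinfo=timezone.utc)
# watchObservedRunningTime 14:01:53.11908445, truncated to microseconds
observed = datetime(2026, 2, 18, 14, 1, 53, 119084, tzinfo=timezone.utc)
print((observed - created).total_seconds())
# -> 119.119084, i.e. the logged podStartSLOduration=119.11908445 ("1m59.11908445s")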
Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.288362 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
Feb 18 14:01:53 crc kubenswrapper[4739]: E0218 14:01:53.288663 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:53.788652856 +0000 UTC m=+146.284373778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.390011 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:01:53 crc kubenswrapper[4739]: E0218 14:01:53.390292 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:53.890276996 +0000 UTC m=+146.385997918 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.491149 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
Feb 18 14:01:53 crc kubenswrapper[4739]: E0218 14:01:53.491535 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:53.991515427 +0000 UTC m=+146.487236349 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.603348 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:01:53 crc kubenswrapper[4739]: E0218 14:01:53.603561 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.103532054 +0000 UTC m=+146.599252976 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.603610 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
Feb 18 14:01:53 crc kubenswrapper[4739]: E0218 14:01:53.603948 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.103937265 +0000 UTC m=+146.599658187 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.630523 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" event={"ID":"21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be","Type":"ContainerStarted","Data":"236074c57c1102d1c9abb448d5248efa5d814c7eb4ba1abf09e30f256385de74"}
Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.656037 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" event={"ID":"8d076be7-905d-48ba-a63c-1c87999890ba","Type":"ContainerStarted","Data":"5f7d692089627c673ae15d03817b92bbfac4bd1b0f33613f15e2635a67ce9b44"}
Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.658993 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" podStartSLOduration=119.658979699 podStartE2EDuration="1m59.658979699s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:53.65591672 +0000 UTC m=+146.151637642" watchObservedRunningTime="2026-02-18 14:01:53.658979699 +0000 UTC m=+146.154700621"
Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.661618 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc" event={"ID":"ed2152ce-68ce-43a9-87fc-b55b6f46e093","Type":"ContainerStarted","Data":"085475ec07d7fa9f0df964f771f0a29197ed45b320566c3fca64c98a15993e48"}
Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.676414 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" podStartSLOduration=119.676401036 podStartE2EDuration="1m59.676401036s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:53.675864512 +0000 UTC m=+146.171585434" watchObservedRunningTime="2026-02-18 14:01:53.676401036 +0000 UTC m=+146.172121958"
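The "SyncLoop (PLEG)" lines above embed each pod lifecycle event as a key/value pair whose payload happens to be valid JSON. An ad-hoc parsing sketch for lines of this shape (illustration only; real consumers would use structured kubelet output rather than regexes over journal text):

import json, re

line = ('I0218 14:01:53.630523 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" '
        'pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t6ncg" '
        'event={"ID":"21d45c8f-8166-4a9f-ae5e-5d2c3ec9d6be","Type":"ContainerStarted",'
        '"Data":"236074c57c1102d1c9abb448d5248efa5d814c7eb4ba1abf09e30f256385de74"}')

m = re.search(r'pod="([^"]+)" event=(\{.*\})', line)
if m:
    pod, event = m.group(1), json.loads(m.group(2))
    print(pod, event["Type"], event["Data"][:12])
# -> openshift-kube-apiserver-operator/... ContainerStarted 236074c57c11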
pod="openshift-etcd-operator/etcd-operator-b45778765-b2m46" podStartSLOduration=119.676401036 podStartE2EDuration="1m59.676401036s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:53.675864512 +0000 UTC m=+146.171585434" watchObservedRunningTime="2026-02-18 14:01:53.676401036 +0000 UTC m=+146.172121958" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.684417 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" event={"ID":"34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac","Type":"ContainerStarted","Data":"436c1454dffbbf07d77daecfc79adac7f01e41c2c69d09ea3732be1117989b0f"} Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.684522 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" event={"ID":"34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac","Type":"ContainerStarted","Data":"b581b1967dfe3011fb0142ca970c8aa6f293934d819ad7ae90bd7f0f329f20ba"} Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.684579 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.701921 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" event={"ID":"b6bb3e55-b6d8-4415-ad8c-a6892ffaa4da","Type":"ContainerStarted","Data":"9bcdae9d8da576faeba2dbeedcd179768a4de4037989feaa9468135c4583c084"} Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.705052 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:53 crc kubenswrapper[4739]: E0218 14:01:53.705202 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.205180746 +0000 UTC m=+146.700901668 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.705364 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:53 crc kubenswrapper[4739]: E0218 14:01:53.706496 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.206480479 +0000 UTC m=+146.702201471 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.714898 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" podStartSLOduration=119.714876205 podStartE2EDuration="1m59.714876205s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:53.713302504 +0000 UTC m=+146.209023436" watchObservedRunningTime="2026-02-18 14:01:53.714876205 +0000 UTC m=+146.210597127" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.717755 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mxwhp" event={"ID":"99012b96-1a3e-48ae-ac97-55ab91c6eb6f","Type":"ContainerStarted","Data":"89e3a99af5e463d1e2ed508dea4255c9770adaf66fe00de05793d97f6d850de9"} Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.724609 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:01:53 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Feb 18 14:01:53 crc kubenswrapper[4739]: [+]process-running ok Feb 18 14:01:53 crc kubenswrapper[4739]: healthz check failed Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.724660 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.748654 
4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-n78q8" event={"ID":"86f15b94-810d-4448-a663-fd8862f0e601","Type":"ContainerStarted","Data":"e27f64741c6acd6ffe70a9a8036fbba883fba5c130a13afc1987261c072ab5e3"} Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.775715 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" event={"ID":"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0","Type":"ContainerStarted","Data":"74c7bbe24b159d4bcf411cc4b8b9d30acdb5e3c7b45e81fb2a3d542d4b3390c4"} Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.788902 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fbnbw" event={"ID":"bb6d5402-0976-4291-b4ee-5c481fd8df72","Type":"ContainerStarted","Data":"11971e4803fd6e1fb0cc9035d716f350af33d4fe02a82656ae665b1515d55e92"} Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.802402 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-x9ffr" podStartSLOduration=119.802386923 podStartE2EDuration="1m59.802386923s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:53.748604081 +0000 UTC m=+146.244324993" watchObservedRunningTime="2026-02-18 14:01:53.802386923 +0000 UTC m=+146.298107845" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.802567 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-n78q8" podStartSLOduration=120.802561997 podStartE2EDuration="2m0.802561997s" podCreationTimestamp="2026-02-18 13:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:53.793854543 +0000 UTC m=+146.289575465" watchObservedRunningTime="2026-02-18 14:01:53.802561997 +0000 UTC m=+146.298282919" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.808159 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" event={"ID":"d27c3dde-4f78-49ec-8cc2-39c588d91f56","Type":"ContainerStarted","Data":"d22e2a825118fd5fe2867dcdb8fdfcade6e169eb808d0666acc156a1903a123a"} Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.808916 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.809697 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" start-of-body= Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.809728 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.809966 4739 reconciler_common.go:159] 
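The router probe failures above include the aggregated healthz body, where each sub-check is prefixed [+] (passing) or [-] (failing) and the trailing line gives the overall verdict. A small parser sketch for that format:

body = """[-]backend-http failed: reason withheld
[-]has-synced failed: reason withheld
[+]process-running ok
healthz check failed"""

checks = {}
for line in body.splitlines():
    if line.startswith(("[+]", "[-]")):
        name = line[3:].split()[0]
        checks[name] = line.startswith("[+]")
print(checks)
# -> {'backend-http': False, 'has-synced': False, 'process-running': True}
# The startup probe keeps failing until every sub-check reports [+].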
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:53 crc kubenswrapper[4739]: E0218 14:01:53.810934 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.310918392 +0000 UTC m=+146.806639314 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.828504 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8lgk6" event={"ID":"5873f31d-7486-489d-866f-9442195a86bf","Type":"ContainerStarted","Data":"2edc3da1889dd5aef1e8d5662a5a7ae98ca072a3efde529c3e5df626c076a934"} Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.828828 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" podStartSLOduration=113.828808371 podStartE2EDuration="1m53.828808371s" podCreationTimestamp="2026-02-18 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:53.827955619 +0000 UTC m=+146.323676551" watchObservedRunningTime="2026-02-18 14:01:53.828808371 +0000 UTC m=+146.324529313" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.840901 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" event={"ID":"4e774d72-bc18-4fab-b988-c36f581d7560","Type":"ContainerStarted","Data":"e588d0a06b60f78a085a5f6c34deecdfe8576f05850c14066206904d239c0286"} Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.840953 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" event={"ID":"4e774d72-bc18-4fab-b988-c36f581d7560","Type":"ContainerStarted","Data":"e49bf89bbe45503ba9159ab5b3ca7e0e669fcd46c09ef23518e59e78c735c005"} Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.844771 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-rtb8n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.844816 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rtb8n" podUID="c8e8ae74-3ef7-42df-99f2-1f67c11edf6d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.844986 4739 patch_prober.go:28] interesting 
pod/oauth-openshift-558db77b4-64j2j container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.845027 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" podUID="663bc659-8603-490f-9b6e-7ffe14960463" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.860740 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.877724 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-fbnbw" podStartSLOduration=6.877624345 podStartE2EDuration="6.877624345s" podCreationTimestamp="2026-02-18 14:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:53.862991809 +0000 UTC m=+146.358712731" watchObservedRunningTime="2026-02-18 14:01:53.877624345 +0000 UTC m=+146.373345267" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.887908 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.912688 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:53 crc kubenswrapper[4739]: E0218 14:01:53.919854 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.419830249 +0000 UTC m=+146.915551171 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.947296 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podStartSLOduration=119.947277644 podStartE2EDuration="1m59.947277644s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:53.946915585 +0000 UTC m=+146.442636507" watchObservedRunningTime="2026-02-18 14:01:53.947277644 +0000 UTC m=+146.442998576" Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.983592 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-18 13:56:52 +0000 UTC, rotation deadline is 2027-01-09 11:23:36.29556401 +0000 UTC Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.983629 4739 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7797h21m42.311937732s for next certificate rotation Feb 18 14:01:53 crc kubenswrapper[4739]: I0218 14:01:53.999336 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-464cg" podStartSLOduration=119.999304931 podStartE2EDuration="1m59.999304931s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:53.994785825 +0000 UTC m=+146.490506767" watchObservedRunningTime="2026-02-18 14:01:53.999304931 +0000 UTC m=+146.495025853" Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.016136 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.016586 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.516572105 +0000 UTC m=+147.012293017 (durationBeforeRetry 500ms). 
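The certificate_manager lines above report the serving certificate's expiry, a rotation deadline chosen ahead of that expiry, and the wait until the deadline; the wait is simply deadline minus the current time. Checking against the timestamps in the log:

from datetime import datetime, timezone

deadline = datetime(2027, 1, 9, 11, 23, 36, 295564, tzinfo=timezone.utc)
now      = datetime(2026, 2, 18, 14, 1, 53, 983629, tzinfo=timezone.utc)  # time of the log line
print(deadline - now)
# -> 324 days, 21:21:42.311935, i.e. the logged 7797h21m42.311937732s wait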
Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.064007 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lmzh5" podStartSLOduration=120.063988713 podStartE2EDuration="2m0.063988713s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:54.062536065 +0000 UTC m=+146.558256997" watchObservedRunningTime="2026-02-18 14:01:54.063988713 +0000 UTC m=+146.559709635"
Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.081742 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7"
Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.082062 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7"
Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.085523 4739 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-44mk7 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.085566 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" podUID="7a738e9a-0692-4476-b9ba-930e3bdc34d2" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/livez\": dial tcp 10.217.0.9:8443: connect: connection refused"
Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.086542 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-n78q8"
Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.087018 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-n78q8"
Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.090204 4739 patch_prober.go:28] interesting pod/apiserver-76f77b778f-n78q8 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.090265 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-n78q8" podUID="86f15b94-810d-4448-a663-fd8862f0e601" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused"
Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.118033 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.118387 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.61837138 +0000 UTC m=+147.114092302 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.139242 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pgswj" podStartSLOduration=120.139225205 podStartE2EDuration="2m0.139225205s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:54.08998399 +0000 UTC m=+146.585704922" watchObservedRunningTime="2026-02-18 14:01:54.139225205 +0000 UTC m=+146.634946127"
Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.219425 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.219547 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.719526138 +0000 UTC m=+147.215247060 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.219689 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.219988 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.71998118 +0000 UTC m=+147.215702102 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.320825 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.320929 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.820909892 +0000 UTC m=+147.316630814 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.321125 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.321427 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.821418785 +0000 UTC m=+147.317139707 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
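Every MountDevice and TearDownAt failure in this stretch has the same root cause: kubelet's registry of CSI plugins has no entry yet for kubevirt.io.hostpath-provisioner, so it cannot construct a client for the driver at all. Drivers enter that registry by exposing a registration socket in kubelet's plugin directory; a sketch for checking what has registered (assumes the default path /var/lib/kubelet/plugins_registry and the usual socket naming, both of which may differ per deployment):

import pathlib

def registered_plugin_sockets(base="/var/lib/kubelet/plugins_registry"):
    p = pathlib.Path(base)
    return sorted(s.name for s in p.glob("*.sock")) if p.is_dir() else []

print(registered_plugin_sockets())
# Once the hostpath provisioner pod is up, a registration socket for
# kubevirt.io.hostpath-provisioner should appear here, and the mount/unmount
# retries above start succeeding.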
Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.421796 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.422003 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.921987589 +0000 UTC m=+147.417708511 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.422258 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.422756 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:54.922746138 +0000 UTC m=+147.418467130 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.523102 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.523305 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.02327378 +0000 UTC m=+147.518994722 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.523452 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.523753 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.023743672 +0000 UTC m=+147.519464664 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.625084 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.625259 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.125226929 +0000 UTC m=+147.620947851 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.625503 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr"
Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.625801 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.125789703 +0000 UTC m=+147.621510625 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.702887 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:01:54 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Feb 18 14:01:54 crc kubenswrapper[4739]: [+]process-running ok Feb 18 14:01:54 crc kubenswrapper[4739]: healthz check failed Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.702945 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.727109 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.727251 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.227230349 +0000 UTC m=+147.722951271 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.727369 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.727660 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.22765122 +0000 UTC m=+147.723372142 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.828599 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.828730 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.328704846 +0000 UTC m=+147.824425768 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.828859 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.829196 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.329188058 +0000 UTC m=+147.824908980 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.851021 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8lgk6" event={"ID":"5873f31d-7486-489d-866f-9442195a86bf","Type":"ContainerStarted","Data":"e4408e01ea6d3ed572094cc716c58a6a0cc397dc7f4837e9f8f0dbaa68c4831b"} Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.851101 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-8lgk6" Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.851528 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.851568 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.852306 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mxwhp" event={"ID":"99012b96-1a3e-48ae-ac97-55ab91c6eb6f","Type":"ContainerStarted","Data":"53a1b946ba4020ebc2169fb6292f920459c8cfb91c458a68a9eab9872915bb7a"} Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.853332 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" event={"ID":"db115d76-8ccf-4c6b-8b1f-f507ad381c95","Type":"ContainerStarted","Data":"4e9aec60e2ded8d58f4b7f571605f78d408914430f9de6c52b1fdf3d3d4230e2"} Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.854792 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc" event={"ID":"ed2152ce-68ce-43a9-87fc-b55b6f46e093","Type":"ContainerStarted","Data":"eb4792e25d8fa18f949a84af22e25b6fe8c8cef0f70ec20c26397ec7c08480fa"} Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.855309 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" start-of-body= Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.855343 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" Feb 18 
14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.855683 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-rtb8n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.855722 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rtb8n" podUID="c8e8ae74-3ef7-42df-99f2-1f67c11edf6d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.880948 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-8lgk6" podStartSLOduration=7.8809309469999995 podStartE2EDuration="7.880930947s" podCreationTimestamp="2026-02-18 14:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:54.88027907 +0000 UTC m=+147.376000002" watchObservedRunningTime="2026-02-18 14:01:54.880930947 +0000 UTC m=+147.376651869" Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.930198 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.930353 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.430329076 +0000 UTC m=+147.926049998 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.930515 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:54 crc kubenswrapper[4739]: E0218 14:01:54.933893 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.433876027 +0000 UTC m=+147.929597029 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.951643 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-mxwhp" podStartSLOduration=120.951622543 podStartE2EDuration="2m0.951622543s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:54.926832846 +0000 UTC m=+147.422553778" watchObservedRunningTime="2026-02-18 14:01:54.951622543 +0000 UTC m=+147.447343465" Feb 18 14:01:54 crc kubenswrapper[4739]: I0218 14:01:54.987021 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mknxc" podStartSLOduration=120.986988272 podStartE2EDuration="2m0.986988272s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:54.986124619 +0000 UTC m=+147.481845541" watchObservedRunningTime="2026-02-18 14:01:54.986988272 +0000 UTC m=+147.482709184" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.031914 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:55 crc kubenswrapper[4739]: E0218 14:01:55.032054 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.532026789 +0000 UTC m=+148.027747711 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.032163 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:55 crc kubenswrapper[4739]: E0218 14:01:55.032461 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.532436609 +0000 UTC m=+148.028157531 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.132912 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:55 crc kubenswrapper[4739]: E0218 14:01:55.133483 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.633467784 +0000 UTC m=+148.129188706 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.173221 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.234925 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:55 crc kubenswrapper[4739]: E0218 14:01:55.235998 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.735987418 +0000 UTC m=+148.231708340 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.273669 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2ch5b"] Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.274556 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.280021 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.334809 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2ch5b"] Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.336263 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.336510 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/692fafe2-8be1-4359-8a74-f8916c8f6d55-catalog-content\") pod \"certified-operators-2ch5b\" (UID: \"692fafe2-8be1-4359-8a74-f8916c8f6d55\") " pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.336547 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78txq\" (UniqueName: \"kubernetes.io/projected/692fafe2-8be1-4359-8a74-f8916c8f6d55-kube-api-access-78txq\") pod \"certified-operators-2ch5b\" (UID: \"692fafe2-8be1-4359-8a74-f8916c8f6d55\") " pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.336594 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/692fafe2-8be1-4359-8a74-f8916c8f6d55-utilities\") pod \"certified-operators-2ch5b\" (UID: \"692fafe2-8be1-4359-8a74-f8916c8f6d55\") " pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:01:55 crc kubenswrapper[4739]: E0218 14:01:55.336738 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.836724195 +0000 UTC m=+148.332445117 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.437955 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/692fafe2-8be1-4359-8a74-f8916c8f6d55-utilities\") pod \"certified-operators-2ch5b\" (UID: \"692fafe2-8be1-4359-8a74-f8916c8f6d55\") " pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.438059 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.438117 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/692fafe2-8be1-4359-8a74-f8916c8f6d55-catalog-content\") pod \"certified-operators-2ch5b\" (UID: \"692fafe2-8be1-4359-8a74-f8916c8f6d55\") " pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.438158 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78txq\" (UniqueName: \"kubernetes.io/projected/692fafe2-8be1-4359-8a74-f8916c8f6d55-kube-api-access-78txq\") pod \"certified-operators-2ch5b\" (UID: \"692fafe2-8be1-4359-8a74-f8916c8f6d55\") " pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.439018 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/692fafe2-8be1-4359-8a74-f8916c8f6d55-utilities\") pod \"certified-operators-2ch5b\" (UID: \"692fafe2-8be1-4359-8a74-f8916c8f6d55\") " pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:01:55 crc kubenswrapper[4739]: E0218 14:01:55.439283 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:55.93927163 +0000 UTC m=+148.434992552 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.439517 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/692fafe2-8be1-4359-8a74-f8916c8f6d55-catalog-content\") pod \"certified-operators-2ch5b\" (UID: \"692fafe2-8be1-4359-8a74-f8916c8f6d55\") " pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.476954 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78txq\" (UniqueName: \"kubernetes.io/projected/692fafe2-8be1-4359-8a74-f8916c8f6d55-kube-api-access-78txq\") pod \"certified-operators-2ch5b\" (UID: \"692fafe2-8be1-4359-8a74-f8916c8f6d55\") " pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.483436 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-47vjm"] Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.484541 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.489710 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.500116 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-47vjm"] Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.539715 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.539907 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg4gr\" (UniqueName: \"kubernetes.io/projected/a44b0172-9ef1-4181-8380-bfe703bdc50d-kube-api-access-gg4gr\") pod \"community-operators-47vjm\" (UID: \"a44b0172-9ef1-4181-8380-bfe703bdc50d\") " pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.539957 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a44b0172-9ef1-4181-8380-bfe703bdc50d-utilities\") pod \"community-operators-47vjm\" (UID: \"a44b0172-9ef1-4181-8380-bfe703bdc50d\") " pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.540046 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a44b0172-9ef1-4181-8380-bfe703bdc50d-catalog-content\") pod \"community-operators-47vjm\" (UID: 
\"a44b0172-9ef1-4181-8380-bfe703bdc50d\") " pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:01:55 crc kubenswrapper[4739]: E0218 14:01:55.540143 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.040129951 +0000 UTC m=+148.535850873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.599124 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.641668 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a44b0172-9ef1-4181-8380-bfe703bdc50d-utilities\") pod \"community-operators-47vjm\" (UID: \"a44b0172-9ef1-4181-8380-bfe703bdc50d\") " pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.641811 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.641848 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a44b0172-9ef1-4181-8380-bfe703bdc50d-catalog-content\") pod \"community-operators-47vjm\" (UID: \"a44b0172-9ef1-4181-8380-bfe703bdc50d\") " pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.641876 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg4gr\" (UniqueName: \"kubernetes.io/projected/a44b0172-9ef1-4181-8380-bfe703bdc50d-kube-api-access-gg4gr\") pod \"community-operators-47vjm\" (UID: \"a44b0172-9ef1-4181-8380-bfe703bdc50d\") " pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.642584 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a44b0172-9ef1-4181-8380-bfe703bdc50d-utilities\") pod \"community-operators-47vjm\" (UID: \"a44b0172-9ef1-4181-8380-bfe703bdc50d\") " pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:01:55 crc kubenswrapper[4739]: E0218 14:01:55.642863 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.142850989 +0000 UTC m=+148.638571911 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.643251 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a44b0172-9ef1-4181-8380-bfe703bdc50d-catalog-content\") pod \"community-operators-47vjm\" (UID: \"a44b0172-9ef1-4181-8380-bfe703bdc50d\") " pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.684122 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n8kkn"] Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.685323 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.691797 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg4gr\" (UniqueName: \"kubernetes.io/projected/a44b0172-9ef1-4181-8380-bfe703bdc50d-kube-api-access-gg4gr\") pod \"community-operators-47vjm\" (UID: \"a44b0172-9ef1-4181-8380-bfe703bdc50d\") " pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.711369 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:01:55 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Feb 18 14:01:55 crc kubenswrapper[4739]: [+]process-running ok Feb 18 14:01:55 crc kubenswrapper[4739]: healthz check failed Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.711410 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.720639 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n8kkn"] Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.742522 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.742719 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ce55882-0feb-4edb-99df-9df2dcb6e62e-utilities\") pod \"certified-operators-n8kkn\" (UID: \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\") " pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.742741 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r242g\" (UniqueName: \"kubernetes.io/projected/7ce55882-0feb-4edb-99df-9df2dcb6e62e-kube-api-access-r242g\") pod \"certified-operators-n8kkn\" (UID: \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\") " pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.742782 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ce55882-0feb-4edb-99df-9df2dcb6e62e-catalog-content\") pod \"certified-operators-n8kkn\" (UID: \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\") " pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:01:55 crc kubenswrapper[4739]: E0218 14:01:55.742896 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.242881609 +0000 UTC m=+148.738602531 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.805826 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.845120 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ce55882-0feb-4edb-99df-9df2dcb6e62e-utilities\") pod \"certified-operators-n8kkn\" (UID: \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\") " pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.845165 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r242g\" (UniqueName: \"kubernetes.io/projected/7ce55882-0feb-4edb-99df-9df2dcb6e62e-kube-api-access-r242g\") pod \"certified-operators-n8kkn\" (UID: \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\") " pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.845209 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ce55882-0feb-4edb-99df-9df2dcb6e62e-catalog-content\") pod \"certified-operators-n8kkn\" (UID: \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\") " pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.845228 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.846064 4739 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ce55882-0feb-4edb-99df-9df2dcb6e62e-utilities\") pod \"certified-operators-n8kkn\" (UID: \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\") " pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.846275 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ce55882-0feb-4edb-99df-9df2dcb6e62e-catalog-content\") pod \"certified-operators-n8kkn\" (UID: \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\") " pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:01:55 crc kubenswrapper[4739]: E0218 14:01:55.846369 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.346360167 +0000 UTC m=+148.842081089 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.909379 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t5j8b"] Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.910262 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.921994 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" event={"ID":"db115d76-8ccf-4c6b-8b1f-f507ad381c95","Type":"ContainerStarted","Data":"ec8c434a941a3aead3ccdc2c7c54080621be7500a89ecbfc3709582eb8f12b43"} Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.945954 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:55 crc kubenswrapper[4739]: E0218 14:01:55.946099 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.446076428 +0000 UTC m=+148.941797350 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.946495 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wccvz\" (UniqueName: \"kubernetes.io/projected/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-kube-api-access-wccvz\") pod \"community-operators-t5j8b\" (UID: \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\") " pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.946549 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-catalog-content\") pod \"community-operators-t5j8b\" (UID: \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\") " pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.946635 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-utilities\") pod \"community-operators-t5j8b\" (UID: \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\") " pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.946678 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:55 crc kubenswrapper[4739]: E0218 14:01:55.946977 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.446961571 +0000 UTC m=+148.942682493 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:55 crc kubenswrapper[4739]: I0218 14:01:55.976652 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t5j8b"] Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.009375 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r242g\" (UniqueName: \"kubernetes.io/projected/7ce55882-0feb-4edb-99df-9df2dcb6e62e-kube-api-access-r242g\") pod \"certified-operators-n8kkn\" (UID: \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\") " pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.021901 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.047884 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.048104 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wccvz\" (UniqueName: \"kubernetes.io/projected/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-kube-api-access-wccvz\") pod \"community-operators-t5j8b\" (UID: \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\") " pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.048207 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-catalog-content\") pod \"community-operators-t5j8b\" (UID: \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\") " pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.048298 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-utilities\") pod \"community-operators-t5j8b\" (UID: \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\") " pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:01:56 crc kubenswrapper[4739]: E0218 14:01:56.049112 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.549096495 +0000 UTC m=+149.044817417 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.050887 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-utilities\") pod \"community-operators-t5j8b\" (UID: \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\") " pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.053535 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-catalog-content\") pod \"community-operators-t5j8b\" (UID: \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\") " pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.137332 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wccvz\" (UniqueName: \"kubernetes.io/projected/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-kube-api-access-wccvz\") pod \"community-operators-t5j8b\" (UID: \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\") " pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.152251 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:56 crc kubenswrapper[4739]: E0218 14:01:56.152663 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.652649035 +0000 UTC m=+149.148369957 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.230055 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.255014 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.255307 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:56 crc kubenswrapper[4739]: E0218 14:01:56.255378 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.755349043 +0000 UTC m=+149.251069965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.255541 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.257524 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.262538 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.307038 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.356989 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.357074 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.357104 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:56 crc kubenswrapper[4739]: E0218 14:01:56.362358 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.862340202 +0000 UTC m=+149.358061124 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.370256 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.371565 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.373425 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2ch5b"] Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.439707 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.460084 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.460516 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.460982 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:56 crc kubenswrapper[4739]: E0218 14:01:56.468900 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:56.968873498 +0000 UTC m=+149.464594420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.562382 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:56 crc kubenswrapper[4739]: E0218 14:01:56.563084 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.063072458 +0000 UTC m=+149.558793380 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.664113 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:56 crc kubenswrapper[4739]: E0218 14:01:56.665325 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.165307644 +0000 UTC m=+149.661028566 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.707941 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:01:56 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Feb 18 14:01:56 crc kubenswrapper[4739]: [+]process-running ok Feb 18 14:01:56 crc kubenswrapper[4739]: healthz check failed Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.707990 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.723840 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-47vjm"] Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.766477 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:56 crc kubenswrapper[4739]: E0218 14:01:56.766764 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.26675255 +0000 UTC m=+149.762473472 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.872128 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:56 crc kubenswrapper[4739]: E0218 14:01:56.872491 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.372476286 +0000 UTC m=+149.868197198 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.898134 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n8kkn"] Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.946592 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8kkn" event={"ID":"7ce55882-0feb-4edb-99df-9df2dcb6e62e","Type":"ContainerStarted","Data":"1bb8b1ac920da0708b75374c6eb6ccb11af1b832abba028a06c828609d37f144"} Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.947647 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ch5b" event={"ID":"692fafe2-8be1-4359-8a74-f8916c8f6d55","Type":"ContainerStarted","Data":"e5127c0ff7f429af7d0aca6c5c08ea2c05b6bea576e6c38224ce6837bef827fc"} Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.949100 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" event={"ID":"db115d76-8ccf-4c6b-8b1f-f507ad381c95","Type":"ContainerStarted","Data":"5a62d87be5fc2476bd7663a4bf5cea4de5e2b35ec2e2fe46d8b36981ea800819"} Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.951084 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47vjm" event={"ID":"a44b0172-9ef1-4181-8380-bfe703bdc50d","Type":"ContainerStarted","Data":"59dbe1e3611ef825eb60e8c102d83aabfcf6d0ed72189d4427096a9698a93bb3"} Feb 18 14:01:56 crc kubenswrapper[4739]: I0218 14:01:56.976092 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:56 crc kubenswrapper[4739]: E0218 14:01:56.976463 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.476428316 +0000 UTC m=+149.972149238 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.051388 4739 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.077030 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:57 crc kubenswrapper[4739]: E0218 14:01:57.077252 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.577230506 +0000 UTC m=+150.072951438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.077532 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:57 crc kubenswrapper[4739]: E0218 14:01:57.077864 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.577855612 +0000 UTC m=+150.073576544 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.101500 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t5j8b"] Feb 18 14:01:57 crc kubenswrapper[4739]: W0218 14:01:57.130475 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28c2c6dd_c0bb_4e02_8ec9_53b9616e1bf2.slice/crio-81b5fcbef1870c44069bb7dc9291550938515d7a028de25c6b79896e1bc2cecd WatchSource:0}: Error finding container 81b5fcbef1870c44069bb7dc9291550938515d7a028de25c6b79896e1bc2cecd: Status 404 returned error can't find the container with id 81b5fcbef1870c44069bb7dc9291550938515d7a028de25c6b79896e1bc2cecd Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.179040 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:57 crc kubenswrapper[4739]: E0218 14:01:57.179346 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.679329538 +0000 UTC m=+150.175050460 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.280807 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:57 crc kubenswrapper[4739]: E0218 14:01:57.281187 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.781171714 +0000 UTC m=+150.276892636 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.381518 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:57 crc kubenswrapper[4739]: E0218 14:01:57.381727 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.881699417 +0000 UTC m=+150.377420339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.382106 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:57 crc kubenswrapper[4739]: E0218 14:01:57.382539 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.882519698 +0000 UTC m=+150.378240700 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.483496 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:57 crc kubenswrapper[4739]: E0218 14:01:57.483688 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.983653056 +0000 UTC m=+150.479374018 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.483960 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:57 crc kubenswrapper[4739]: E0218 14:01:57.484292 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 14:01:57.984277342 +0000 UTC m=+150.479998324 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dqtnr" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.584583 4739 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-18T14:01:57.051418113Z","Handler":null,"Name":""} Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.584947 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:57 crc kubenswrapper[4739]: E0218 14:01:57.585335 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 14:01:58.085319737 +0000 UTC m=+150.581040659 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.588058 4739 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.588103 4739 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.672011 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wznkg"] Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.672980 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.674843 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.685488 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wznkg"] Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.686034 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.704032 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:01:57 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Feb 18 14:01:57 crc kubenswrapper[4739]: [+]process-running ok Feb 18 14:01:57 crc kubenswrapper[4739]: healthz check failed Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.704384 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.774065 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.774109 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.787093 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6955631f-9981-47a5-8ecb-8756df4e0256-utilities\") pod \"redhat-marketplace-wznkg\" (UID: \"6955631f-9981-47a5-8ecb-8756df4e0256\") " pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.787253 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6955631f-9981-47a5-8ecb-8756df4e0256-catalog-content\") pod \"redhat-marketplace-wznkg\" (UID: \"6955631f-9981-47a5-8ecb-8756df4e0256\") " pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.787372 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbcms\" (UniqueName: \"kubernetes.io/projected/6955631f-9981-47a5-8ecb-8756df4e0256-kube-api-access-nbcms\") pod \"redhat-marketplace-wznkg\" (UID: \"6955631f-9981-47a5-8ecb-8756df4e0256\") " pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.814725 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dqtnr\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.818293 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.889109 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.889413 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6955631f-9981-47a5-8ecb-8756df4e0256-catalog-content\") pod \"redhat-marketplace-wznkg\" (UID: \"6955631f-9981-47a5-8ecb-8756df4e0256\") " pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.889470 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbcms\" (UniqueName: \"kubernetes.io/projected/6955631f-9981-47a5-8ecb-8756df4e0256-kube-api-access-nbcms\") pod \"redhat-marketplace-wznkg\" (UID: \"6955631f-9981-47a5-8ecb-8756df4e0256\") " pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.889532 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6955631f-9981-47a5-8ecb-8756df4e0256-utilities\") pod \"redhat-marketplace-wznkg\" (UID: \"6955631f-9981-47a5-8ecb-8756df4e0256\") " pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.890092 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6955631f-9981-47a5-8ecb-8756df4e0256-utilities\") pod \"redhat-marketplace-wznkg\" (UID: \"6955631f-9981-47a5-8ecb-8756df4e0256\") " pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.890642 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6955631f-9981-47a5-8ecb-8756df4e0256-catalog-content\") pod \"redhat-marketplace-wznkg\" (UID: \"6955631f-9981-47a5-8ecb-8756df4e0256\") " pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.910969 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.915367 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbcms\" (UniqueName: \"kubernetes.io/projected/6955631f-9981-47a5-8ecb-8756df4e0256-kube-api-access-nbcms\") pod \"redhat-marketplace-wznkg\" (UID: \"6955631f-9981-47a5-8ecb-8756df4e0256\") " pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.965122 4739 generic.go:334] "Generic (PLEG): container finished" podID="28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" containerID="0a76e46bf994105b5a8e8a327f815ac68db10d09a5df73e4062a197c3fcf75a5" exitCode=0 Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.965270 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5j8b" event={"ID":"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2","Type":"ContainerDied","Data":"0a76e46bf994105b5a8e8a327f815ac68db10d09a5df73e4062a197c3fcf75a5"} Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.965320 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5j8b" event={"ID":"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2","Type":"ContainerStarted","Data":"81b5fcbef1870c44069bb7dc9291550938515d7a028de25c6b79896e1bc2cecd"} Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.967050 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.968004 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"8da6c0deaae0a27d49185a7f50e5f502f2ddf6d0698cd86cad40a5e6540e0378"} Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.968041 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9b0c3fa0f0da5808ea89cc75a0163b69cb17d1b688e3974e71e9747e5134f851"} Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.968253 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.970972 4739 generic.go:334] "Generic (PLEG): container finished" podID="a44b0172-9ef1-4181-8380-bfe703bdc50d" containerID="551cb4bae6665ae27f7d5b2decaafebe71c83e00b8a73881bb3e336390146e0e" exitCode=0 Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.971022 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47vjm" event={"ID":"a44b0172-9ef1-4181-8380-bfe703bdc50d","Type":"ContainerDied","Data":"551cb4bae6665ae27f7d5b2decaafebe71c83e00b8a73881bb3e336390146e0e"} Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.979953 4739 generic.go:334] "Generic (PLEG): container finished" podID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" containerID="22ab4c4400803a84698f429676267f73d2f72204f8bfd5e8b8c44045eb32a01a" exitCode=0 Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.980095 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8kkn" event={"ID":"7ce55882-0feb-4edb-99df-9df2dcb6e62e","Type":"ContainerDied","Data":"22ab4c4400803a84698f429676267f73d2f72204f8bfd5e8b8c44045eb32a01a"} Feb 18 14:01:57 crc 
kubenswrapper[4739]: I0218 14:01:57.989698 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.993910 4739 generic.go:334] "Generic (PLEG): container finished" podID="692fafe2-8be1-4359-8a74-f8916c8f6d55" containerID="4c1b881b59ce09043ae130740ace2bb157df06ba6ab2c9601dc76ee0977e7608" exitCode=0 Feb 18 14:01:57 crc kubenswrapper[4739]: I0218 14:01:57.993989 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ch5b" event={"ID":"692fafe2-8be1-4359-8a74-f8916c8f6d55","Type":"ContainerDied","Data":"4c1b881b59ce09043ae130740ace2bb157df06ba6ab2c9601dc76ee0977e7608"} Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.001572 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9a1aa940676c1fe86ed10576072f18f096d597d3a0f3ef9cf86f4973b5b08f8f"} Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.001604 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"cea8c372be0df8247e972ed465c79036115cba0a5d76071a77b952c15a262844"} Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.003020 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"755128bbca4d4db9e21eeaf033ab801a571edabac2fd8b18b9aba579152986dd"} Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.003080 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e46cb7a7c1d8bb18f756f48deec52b33ec495dfac9df36980e5e58aa5f7d6301"} Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.027562 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" event={"ID":"db115d76-8ccf-4c6b-8b1f-f507ad381c95","Type":"ContainerStarted","Data":"96ae3fcb8a5bdbeca9bba7e6dc545b0aae0b9cd422530b28a73190bdfb3ff8b1"} Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.074171 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fst2x"] Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.081775 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.093215 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fst2x"] Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.109821 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dqtnr"] Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.197932 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v8dg\" (UniqueName: \"kubernetes.io/projected/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-kube-api-access-7v8dg\") pod \"redhat-marketplace-fst2x\" (UID: \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\") " pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.199021 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-catalog-content\") pod \"redhat-marketplace-fst2x\" (UID: \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\") " pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.199073 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-utilities\") pod \"redhat-marketplace-fst2x\" (UID: \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\") " pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.200008 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-q8t8f" podStartSLOduration=11.199989996 podStartE2EDuration="11.199989996s" podCreationTimestamp="2026-02-18 14:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:58.175818935 +0000 UTC m=+150.671539857" watchObservedRunningTime="2026-02-18 14:01:58.199989996 +0000 UTC m=+150.695710918" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.294182 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wznkg"] Feb 18 14:01:58 crc kubenswrapper[4739]: W0218 14:01:58.298561 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6955631f_9981_47a5_8ecb_8756df4e0256.slice/crio-10d8a724d59bd6a5d14617a528e748b2601030ae0dc43e290bc4b95d4dedba40 WatchSource:0}: Error finding container 10d8a724d59bd6a5d14617a528e748b2601030ae0dc43e290bc4b95d4dedba40: Status 404 returned error can't find the container with id 10d8a724d59bd6a5d14617a528e748b2601030ae0dc43e290bc4b95d4dedba40 Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.300032 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-utilities\") pod \"redhat-marketplace-fst2x\" (UID: \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\") " pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.300196 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v8dg\" (UniqueName: 
\"kubernetes.io/projected/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-kube-api-access-7v8dg\") pod \"redhat-marketplace-fst2x\" (UID: \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\") " pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.301065 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-utilities\") pod \"redhat-marketplace-fst2x\" (UID: \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\") " pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.303501 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-catalog-content\") pod \"redhat-marketplace-fst2x\" (UID: \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\") " pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.303555 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-catalog-content\") pod \"redhat-marketplace-fst2x\" (UID: \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\") " pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.318059 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v8dg\" (UniqueName: \"kubernetes.io/projected/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-kube-api-access-7v8dg\") pod \"redhat-marketplace-fst2x\" (UID: \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\") " pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.404523 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.427677 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.473463 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fm56z"] Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.474680 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.476602 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fm56z"] Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.477112 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.505228 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwf78\" (UniqueName: \"kubernetes.io/projected/a7549289-fee3-4211-b340-731ff70593d1-kube-api-access-hwf78\") pod \"redhat-operators-fm56z\" (UID: \"a7549289-fee3-4211-b340-731ff70593d1\") " pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.505278 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7549289-fee3-4211-b340-731ff70593d1-catalog-content\") pod \"redhat-operators-fm56z\" (UID: \"a7549289-fee3-4211-b340-731ff70593d1\") " pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.505349 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7549289-fee3-4211-b340-731ff70593d1-utilities\") pod \"redhat-operators-fm56z\" (UID: \"a7549289-fee3-4211-b340-731ff70593d1\") " pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.607907 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwf78\" (UniqueName: \"kubernetes.io/projected/a7549289-fee3-4211-b340-731ff70593d1-kube-api-access-hwf78\") pod \"redhat-operators-fm56z\" (UID: \"a7549289-fee3-4211-b340-731ff70593d1\") " pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.607954 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7549289-fee3-4211-b340-731ff70593d1-catalog-content\") pod \"redhat-operators-fm56z\" (UID: \"a7549289-fee3-4211-b340-731ff70593d1\") " pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.608020 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7549289-fee3-4211-b340-731ff70593d1-utilities\") pod \"redhat-operators-fm56z\" (UID: \"a7549289-fee3-4211-b340-731ff70593d1\") " pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.608648 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7549289-fee3-4211-b340-731ff70593d1-utilities\") pod \"redhat-operators-fm56z\" (UID: \"a7549289-fee3-4211-b340-731ff70593d1\") " pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.608870 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7549289-fee3-4211-b340-731ff70593d1-catalog-content\") pod \"redhat-operators-fm56z\" (UID: \"a7549289-fee3-4211-b340-731ff70593d1\") " 
pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.627405 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwf78\" (UniqueName: \"kubernetes.io/projected/a7549289-fee3-4211-b340-731ff70593d1-kube-api-access-hwf78\") pod \"redhat-operators-fm56z\" (UID: \"a7549289-fee3-4211-b340-731ff70593d1\") " pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.662956 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fst2x"] Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.671232 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ccnsw"] Feb 18 14:01:58 crc kubenswrapper[4739]: W0218 14:01:58.672319 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ef7eb68_c7a7_448e_bbbc_10798fabc4e6.slice/crio-96ae9a700ac6737e5625e17caed3c6cbabf21ead3f7cc350e69ee97905a208a7 WatchSource:0}: Error finding container 96ae9a700ac6737e5625e17caed3c6cbabf21ead3f7cc350e69ee97905a208a7: Status 404 returned error can't find the container with id 96ae9a700ac6737e5625e17caed3c6cbabf21ead3f7cc350e69ee97905a208a7 Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.672454 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.680494 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ccnsw"] Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.703841 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:01:58 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Feb 18 14:01:58 crc kubenswrapper[4739]: [+]process-running ok Feb 18 14:01:58 crc kubenswrapper[4739]: healthz check failed Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.704087 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.708882 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt9pm\" (UniqueName: \"kubernetes.io/projected/7772552e-1443-4f54-a50c-a73f55863363-kube-api-access-qt9pm\") pod \"redhat-operators-ccnsw\" (UID: \"7772552e-1443-4f54-a50c-a73f55863363\") " pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.708985 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7772552e-1443-4f54-a50c-a73f55863363-utilities\") pod \"redhat-operators-ccnsw\" (UID: \"7772552e-1443-4f54-a50c-a73f55863363\") " pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.709007 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7772552e-1443-4f54-a50c-a73f55863363-catalog-content\") pod \"redhat-operators-ccnsw\" (UID: \"7772552e-1443-4f54-a50c-a73f55863363\") " pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.810352 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7772552e-1443-4f54-a50c-a73f55863363-utilities\") pod \"redhat-operators-ccnsw\" (UID: \"7772552e-1443-4f54-a50c-a73f55863363\") " pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.810399 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7772552e-1443-4f54-a50c-a73f55863363-catalog-content\") pod \"redhat-operators-ccnsw\" (UID: \"7772552e-1443-4f54-a50c-a73f55863363\") " pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.810431 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt9pm\" (UniqueName: \"kubernetes.io/projected/7772552e-1443-4f54-a50c-a73f55863363-kube-api-access-qt9pm\") pod \"redhat-operators-ccnsw\" (UID: \"7772552e-1443-4f54-a50c-a73f55863363\") " pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.811006 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7772552e-1443-4f54-a50c-a73f55863363-utilities\") pod \"redhat-operators-ccnsw\" (UID: \"7772552e-1443-4f54-a50c-a73f55863363\") " pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.811157 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7772552e-1443-4f54-a50c-a73f55863363-catalog-content\") pod \"redhat-operators-ccnsw\" (UID: \"7772552e-1443-4f54-a50c-a73f55863363\") " pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.833744 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt9pm\" (UniqueName: \"kubernetes.io/projected/7772552e-1443-4f54-a50c-a73f55863363-kube-api-access-qt9pm\") pod \"redhat-operators-ccnsw\" (UID: \"7772552e-1443-4f54-a50c-a73f55863363\") " pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.857039 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.979274 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.980142 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.982995 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.985749 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 18 14:01:58 crc kubenswrapper[4739]: I0218 14:01:58.990600 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.013648 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.013699 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.035548 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.043489 4739 generic.go:334] "Generic (PLEG): container finished" podID="f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0" containerID="74c7bbe24b159d4bcf411cc4b8b9d30acdb5e3c7b45e81fb2a3d542d4b3390c4" exitCode=0 Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.043576 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" event={"ID":"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0","Type":"ContainerDied","Data":"74c7bbe24b159d4bcf411cc4b8b9d30acdb5e3c7b45e81fb2a3d542d4b3390c4"} Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.048204 4739 generic.go:334] "Generic (PLEG): container finished" podID="6955631f-9981-47a5-8ecb-8756df4e0256" containerID="9bb3a5841305148839f6ad188df3883061d1654f9985c3ee6dbc318088131f64" exitCode=0 Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.048247 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wznkg" event={"ID":"6955631f-9981-47a5-8ecb-8756df4e0256","Type":"ContainerDied","Data":"9bb3a5841305148839f6ad188df3883061d1654f9985c3ee6dbc318088131f64"} Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.048298 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wznkg" event={"ID":"6955631f-9981-47a5-8ecb-8756df4e0256","Type":"ContainerStarted","Data":"10d8a724d59bd6a5d14617a528e748b2601030ae0dc43e290bc4b95d4dedba40"} Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.050672 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" event={"ID":"42c00254-0b69-45d3-8dd6-7f2ee914d65d","Type":"ContainerStarted","Data":"c53d5a482db632b149d61954455c1b63897dc05aa1c7bf18271a0c5962e25f92"} Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 
14:01:59.050711 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" event={"ID":"42c00254-0b69-45d3-8dd6-7f2ee914d65d","Type":"ContainerStarted","Data":"b96e22f2e4072131e39645eec1bdeb575f2e322af330e9ccff4e59c7655f9d27"} Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.050817 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.056068 4739 generic.go:334] "Generic (PLEG): container finished" podID="1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" containerID="d9f38a5539526a77e4dfda52eaa55e735ab6abeb3007d8993d086f49fd96f3f0" exitCode=0 Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.057170 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fst2x" event={"ID":"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6","Type":"ContainerDied","Data":"d9f38a5539526a77e4dfda52eaa55e735ab6abeb3007d8993d086f49fd96f3f0"} Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.057207 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fst2x" event={"ID":"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6","Type":"ContainerStarted","Data":"96ae9a700ac6737e5625e17caed3c6cbabf21ead3f7cc350e69ee97905a208a7"} Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.097089 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" podStartSLOduration=125.09706732 podStartE2EDuration="2m5.09706732s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:01:59.092831061 +0000 UTC m=+151.588551993" watchObservedRunningTime="2026-02-18 14:01:59.09706732 +0000 UTC m=+151.592788252" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.104175 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.105654 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.108777 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-n78q8" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.111128 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-44mk7" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.114633 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.114707 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 14:01:59 crc 
kubenswrapper[4739]: I0218 14:01:59.115208 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.149357 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.303735 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fm56z"] Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.307718 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.373043 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.373393 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.604532 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ccnsw"] Feb 18 14:01:59 crc kubenswrapper[4739]: W0218 14:01:59.630050 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7772552e_1443_4f54_a50c_a73f55863363.slice/crio-b8cd985c8107733acf822a9680d0b58c3fe410a6ba3b0e24962d1e5b7a41ea56 WatchSource:0}: Error finding container b8cd985c8107733acf822a9680d0b58c3fe410a6ba3b0e24962d1e5b7a41ea56: Status 404 returned error can't find the container with id b8cd985c8107733acf822a9680d0b58c3fe410a6ba3b0e24962d1e5b7a41ea56 Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.689890 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 14:01:59 crc kubenswrapper[4739]: W0218 14:01:59.691940 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3069f8d4_4c22_4d3e_8d00_b08abfc1ca7a.slice/crio-7e5dcd03ce7d4fba66e725fc26dac9fb74b05d2a9a05874d0bafc28217a4040b WatchSource:0}: Error finding container 7e5dcd03ce7d4fba66e725fc26dac9fb74b05d2a9a05874d0bafc28217a4040b: Status 404 returned error can't find the container with id 7e5dcd03ce7d4fba66e725fc26dac9fb74b05d2a9a05874d0bafc28217a4040b Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.700024 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.702395 4739 patch_prober.go:28] 
interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 14:01:59 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Feb 18 14:01:59 crc kubenswrapper[4739]: [+]process-running ok Feb 18 14:01:59 crc kubenswrapper[4739]: healthz check failed Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.702437 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 14:01:59 crc kubenswrapper[4739]: I0218 14:01:59.830311 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.078461 4739 generic.go:334] "Generic (PLEG): container finished" podID="a7549289-fee3-4211-b340-731ff70593d1" containerID="9e47b85d370233a0bf233d7161a2f7316f31cfa5939b2305fca3b59a04f4c242" exitCode=0 Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.078585 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm56z" event={"ID":"a7549289-fee3-4211-b340-731ff70593d1","Type":"ContainerDied","Data":"9e47b85d370233a0bf233d7161a2f7316f31cfa5939b2305fca3b59a04f4c242"} Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.079037 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm56z" event={"ID":"a7549289-fee3-4211-b340-731ff70593d1","Type":"ContainerStarted","Data":"ec2d2f157f528c4b55bc8096e827bd5672ec6bdfb957669781807b88427d0279"} Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.083978 4739 generic.go:334] "Generic (PLEG): container finished" podID="7772552e-1443-4f54-a50c-a73f55863363" containerID="c4dacf6a967bd79ba6a5eb88a268ae21fb3c29db76563c7761bb556ccca46a0b" exitCode=0 Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.084035 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccnsw" event={"ID":"7772552e-1443-4f54-a50c-a73f55863363","Type":"ContainerDied","Data":"c4dacf6a967bd79ba6a5eb88a268ae21fb3c29db76563c7761bb556ccca46a0b"} Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.084059 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccnsw" event={"ID":"7772552e-1443-4f54-a50c-a73f55863363","Type":"ContainerStarted","Data":"b8cd985c8107733acf822a9680d0b58c3fe410a6ba3b0e24962d1e5b7a41ea56"} Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.090786 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a","Type":"ContainerStarted","Data":"7e5dcd03ce7d4fba66e725fc26dac9fb74b05d2a9a05874d0bafc28217a4040b"} Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.210977 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-rtb8n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.211041 4739 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-rtb8n" podUID="c8e8ae74-3ef7-42df-99f2-1f67c11edf6d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.210985 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-rtb8n container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.211344 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-rtb8n" podUID="c8e8ae74-3ef7-42df-99f2-1f67c11edf6d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.224709 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.296544 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.298312 4739 patch_prober.go:28] interesting pod/console-f9d7485db-r2dqq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.298352 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-r2dqq" podUID="dcd69695-49d3-46a8-9981-b592c44e827e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.299726 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.464334 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.647263 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-config-volume\") pod \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\" (UID: \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\") " Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.647538 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdqxz\" (UniqueName: \"kubernetes.io/projected/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-kube-api-access-bdqxz\") pod \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\" (UID: \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\") " Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.647614 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-secret-volume\") pod \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\" (UID: \"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0\") " Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.648112 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-config-volume" (OuterVolumeSpecName: "config-volume") pod "f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0" (UID: "f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.655167 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-kube-api-access-bdqxz" (OuterVolumeSpecName: "kube-api-access-bdqxz") pod "f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0" (UID: "f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0"). InnerVolumeSpecName "kube-api-access-bdqxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.672139 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0" (UID: "f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.723144 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.726734 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.750118 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.750144 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 14:02:00 crc kubenswrapper[4739]: I0218 14:02:00.750154 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdqxz\" (UniqueName: \"kubernetes.io/projected/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0-kube-api-access-bdqxz\") on node \"crc\" DevicePath \"\"" Feb 18 14:02:01 crc kubenswrapper[4739]: I0218 14:02:01.113893 4739 generic.go:334] "Generic (PLEG): container finished" podID="3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a" containerID="cdbff388006ce73bd61ca7ba3e30d7b284e66ccc9d3af37c29cecae01a6214aa" exitCode=0 Feb 18 14:02:01 crc kubenswrapper[4739]: I0218 14:02:01.113949 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a","Type":"ContainerDied","Data":"cdbff388006ce73bd61ca7ba3e30d7b284e66ccc9d3af37c29cecae01a6214aa"} Feb 18 14:02:01 crc kubenswrapper[4739]: I0218 14:02:01.124589 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" Feb 18 14:02:01 crc kubenswrapper[4739]: I0218 14:02:01.124584 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj" event={"ID":"f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0","Type":"ContainerDied","Data":"24416838c3485f5f59f847cbabc4eb0faac583f47943bdc172447667af33c1a4"} Feb 18 14:02:01 crc kubenswrapper[4739]: I0218 14:02:01.124634 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24416838c3485f5f59f847cbabc4eb0faac583f47943bdc172447667af33c1a4" Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.375863 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 14:02:02 crc kubenswrapper[4739]: E0218 14:02:02.376431 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0" containerName="collect-profiles" Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.376462 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0" containerName="collect-profiles" Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.376592 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0" containerName="collect-profiles" Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.377039 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.380376 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.385684 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.390869 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.499993 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/962751cd-ff1a-4e95-8027-aebe728486cd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"962751cd-ff1a-4e95-8027-aebe728486cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.500028 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/962751cd-ff1a-4e95-8027-aebe728486cd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"962751cd-ff1a-4e95-8027-aebe728486cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.534237 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.602204 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/962751cd-ff1a-4e95-8027-aebe728486cd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"962751cd-ff1a-4e95-8027-aebe728486cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.602248 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/962751cd-ff1a-4e95-8027-aebe728486cd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"962751cd-ff1a-4e95-8027-aebe728486cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.602349 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/962751cd-ff1a-4e95-8027-aebe728486cd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"962751cd-ff1a-4e95-8027-aebe728486cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.658079 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/962751cd-ff1a-4e95-8027-aebe728486cd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"962751cd-ff1a-4e95-8027-aebe728486cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.703290 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.703438 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a-kubelet-dir\") pod \"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a\" (UID: \"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a\") " Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.703508 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a-kube-api-access\") pod \"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a\" (UID: \"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a\") " Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.703504 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a" (UID: "3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.703970 4739 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.708438 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a" (UID: "3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:02:02 crc kubenswrapper[4739]: I0218 14:02:02.804914 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 14:02:03 crc kubenswrapper[4739]: I0218 14:02:03.155831 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a","Type":"ContainerDied","Data":"7e5dcd03ce7d4fba66e725fc26dac9fb74b05d2a9a05874d0bafc28217a4040b"} Feb 18 14:02:03 crc kubenswrapper[4739]: I0218 14:02:03.155876 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e5dcd03ce7d4fba66e725fc26dac9fb74b05d2a9a05874d0bafc28217a4040b" Feb 18 14:02:03 crc kubenswrapper[4739]: I0218 14:02:03.155935 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 14:02:03 crc kubenswrapper[4739]: I0218 14:02:03.215016 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 14:02:03 crc kubenswrapper[4739]: W0218 14:02:03.226989 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod962751cd_ff1a_4e95_8027_aebe728486cd.slice/crio-660633d61b90a4a1e0f7bbbbab980abca38a5f3757dd8227849504b8ff1e2aae WatchSource:0}: Error finding container 660633d61b90a4a1e0f7bbbbab980abca38a5f3757dd8227849504b8ff1e2aae: Status 404 returned error can't find the container with id 660633d61b90a4a1e0f7bbbbab980abca38a5f3757dd8227849504b8ff1e2aae Feb 18 14:02:04 crc kubenswrapper[4739]: I0218 14:02:04.161064 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"962751cd-ff1a-4e95-8027-aebe728486cd","Type":"ContainerStarted","Data":"660633d61b90a4a1e0f7bbbbab980abca38a5f3757dd8227849504b8ff1e2aae"} Feb 18 14:02:05 crc kubenswrapper[4739]: I0218 14:02:05.188313 4739 generic.go:334] "Generic (PLEG): container finished" podID="962751cd-ff1a-4e95-8027-aebe728486cd" containerID="b8f2018a5a199accc20294c64dc0aa16c4653a6e7a1587e33d27ea34f1e7df2f" exitCode=0 Feb 18 14:02:05 crc kubenswrapper[4739]: I0218 14:02:05.188356 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"962751cd-ff1a-4e95-8027-aebe728486cd","Type":"ContainerDied","Data":"b8f2018a5a199accc20294c64dc0aa16c4653a6e7a1587e33d27ea34f1e7df2f"} Feb 18 14:02:05 crc kubenswrapper[4739]: I0218 14:02:05.508804 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-8lgk6" Feb 18 14:02:06 crc kubenswrapper[4739]: I0218 14:02:06.501887 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 14:02:06 crc kubenswrapper[4739]: I0218 14:02:06.671758 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/962751cd-ff1a-4e95-8027-aebe728486cd-kube-api-access\") pod \"962751cd-ff1a-4e95-8027-aebe728486cd\" (UID: \"962751cd-ff1a-4e95-8027-aebe728486cd\") " Feb 18 14:02:06 crc kubenswrapper[4739]: I0218 14:02:06.671833 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/962751cd-ff1a-4e95-8027-aebe728486cd-kubelet-dir\") pod \"962751cd-ff1a-4e95-8027-aebe728486cd\" (UID: \"962751cd-ff1a-4e95-8027-aebe728486cd\") " Feb 18 14:02:06 crc kubenswrapper[4739]: I0218 14:02:06.671970 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/962751cd-ff1a-4e95-8027-aebe728486cd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "962751cd-ff1a-4e95-8027-aebe728486cd" (UID: "962751cd-ff1a-4e95-8027-aebe728486cd"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:02:06 crc kubenswrapper[4739]: I0218 14:02:06.672138 4739 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/962751cd-ff1a-4e95-8027-aebe728486cd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 14:02:06 crc kubenswrapper[4739]: I0218 14:02:06.687313 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/962751cd-ff1a-4e95-8027-aebe728486cd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "962751cd-ff1a-4e95-8027-aebe728486cd" (UID: "962751cd-ff1a-4e95-8027-aebe728486cd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:02:06 crc kubenswrapper[4739]: I0218 14:02:06.773040 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/962751cd-ff1a-4e95-8027-aebe728486cd-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 14:02:07 crc kubenswrapper[4739]: I0218 14:02:07.208311 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"962751cd-ff1a-4e95-8027-aebe728486cd","Type":"ContainerDied","Data":"660633d61b90a4a1e0f7bbbbab980abca38a5f3757dd8227849504b8ff1e2aae"} Feb 18 14:02:07 crc kubenswrapper[4739]: I0218 14:02:07.208346 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="660633d61b90a4a1e0f7bbbbab980abca38a5f3757dd8227849504b8ff1e2aae" Feb 18 14:02:07 crc kubenswrapper[4739]: I0218 14:02:07.208358 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 14:02:10 crc kubenswrapper[4739]: I0218 14:02:10.214872 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-rtb8n" Feb 18 14:02:10 crc kubenswrapper[4739]: I0218 14:02:10.317438 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:02:10 crc kubenswrapper[4739]: I0218 14:02:10.321886 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:02:16 crc kubenswrapper[4739]: I0218 14:02:16.723267 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:02:16 crc kubenswrapper[4739]: I0218 14:02:16.733492 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/151d76ab-14d7-4b0b-a930-785156818a3e-metrics-certs\") pod \"network-metrics-daemon-nhkmm\" (UID: \"151d76ab-14d7-4b0b-a930-785156818a3e\") " pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:02:16 crc kubenswrapper[4739]: I0218 14:02:16.923843 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nhkmm" Feb 18 14:02:17 crc kubenswrapper[4739]: I0218 14:02:17.825372 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:02:28 crc kubenswrapper[4739]: E0218 14:02:28.568597 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 14:02:28 crc kubenswrapper[4739]: E0218 14:02:28.569249 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qt9pm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-ccnsw_openshift-marketplace(7772552e-1443-4f54-a50c-a73f55863363): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 14:02:28 crc kubenswrapper[4739]: E0218 14:02:28.572716 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-ccnsw" podUID="7772552e-1443-4f54-a50c-a73f55863363" Feb 18 14:02:28 crc kubenswrapper[4739]: E0218 14:02:28.618408 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 14:02:28 crc kubenswrapper[4739]: E0218 14:02:28.618585 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gg4gr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-47vjm_openshift-marketplace(a44b0172-9ef1-4181-8380-bfe703bdc50d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 14:02:28 crc kubenswrapper[4739]: E0218 14:02:28.619701 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-47vjm" podUID="a44b0172-9ef1-4181-8380-bfe703bdc50d" Feb 18 14:02:28 crc kubenswrapper[4739]: I0218 14:02:28.690613 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-nhkmm"] Feb 18 14:02:28 crc kubenswrapper[4739]: E0218 14:02:28.700618 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 14:02:28 crc kubenswrapper[4739]: E0218 14:02:28.700760 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r242g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-n8kkn_openshift-marketplace(7ce55882-0feb-4edb-99df-9df2dcb6e62e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 14:02:28 crc kubenswrapper[4739]: E0218 14:02:28.701973 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-n8kkn" podUID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.341793 4739 generic.go:334] "Generic (PLEG): container finished" podID="692fafe2-8be1-4359-8a74-f8916c8f6d55" containerID="e02812fba123a1b640a8c7df98da2f8bd68a0b15a0172cda00785537e0d56662" exitCode=0 Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.342381 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ch5b" event={"ID":"692fafe2-8be1-4359-8a74-f8916c8f6d55","Type":"ContainerDied","Data":"e02812fba123a1b640a8c7df98da2f8bd68a0b15a0172cda00785537e0d56662"} Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.344678 4739 generic.go:334] "Generic (PLEG): container finished" podID="1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" containerID="c7007a9b012b9e998d5fc274e2d579ca39008b701f39ad42a1d228cbf01383d0" exitCode=0 Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.344740 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fst2x" event={"ID":"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6","Type":"ContainerDied","Data":"c7007a9b012b9e998d5fc274e2d579ca39008b701f39ad42a1d228cbf01383d0"} Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.346997 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" event={"ID":"151d76ab-14d7-4b0b-a930-785156818a3e","Type":"ContainerStarted","Data":"f194c07096de388bc3341863c0856a96bdf670c60f9d57b7eb4f4b94ac43a7d0"} Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.347051 4739 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" event={"ID":"151d76ab-14d7-4b0b-a930-785156818a3e","Type":"ContainerStarted","Data":"03a828a3b77017391b65e8d41e2dccc5854cffa517314ec856ea8317072d18a8"} Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.347063 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nhkmm" event={"ID":"151d76ab-14d7-4b0b-a930-785156818a3e","Type":"ContainerStarted","Data":"6ee88ab257606ffd317d7e44ee6b70d65dd1f0ac0630eb23bdee3082d8d2ad30"} Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.351628 4739 generic.go:334] "Generic (PLEG): container finished" podID="28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" containerID="0860055875df375252bcfb11d2392a31b59063d708c629e10cdb5217a0d78de6" exitCode=0 Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.351677 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5j8b" event={"ID":"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2","Type":"ContainerDied","Data":"0860055875df375252bcfb11d2392a31b59063d708c629e10cdb5217a0d78de6"} Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.354423 4739 generic.go:334] "Generic (PLEG): container finished" podID="6955631f-9981-47a5-8ecb-8756df4e0256" containerID="8a4a2cb16b50f7bad58d4da02480e75d7e91e89560e15dff3da7b4be01b7785c" exitCode=0 Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.354502 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wznkg" event={"ID":"6955631f-9981-47a5-8ecb-8756df4e0256","Type":"ContainerDied","Data":"8a4a2cb16b50f7bad58d4da02480e75d7e91e89560e15dff3da7b4be01b7785c"} Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.366419 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm56z" event={"ID":"a7549289-fee3-4211-b340-731ff70593d1","Type":"ContainerStarted","Data":"d8f6d516155d589e7d1eb7a6eea99d4c413ff9b7a11cd8c67dd3e58c0a1f215c"} Feb 18 14:02:29 crc kubenswrapper[4739]: E0218 14:02:29.368193 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-ccnsw" podUID="7772552e-1443-4f54-a50c-a73f55863363" Feb 18 14:02:29 crc kubenswrapper[4739]: E0218 14:02:29.368402 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-n8kkn" podUID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" Feb 18 14:02:29 crc kubenswrapper[4739]: E0218 14:02:29.368421 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-47vjm" podUID="a44b0172-9ef1-4181-8380-bfe703bdc50d" Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.373026 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 
14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.373069 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:02:29 crc kubenswrapper[4739]: I0218 14:02:29.445026 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-nhkmm" podStartSLOduration=155.445003911 podStartE2EDuration="2m35.445003911s" podCreationTimestamp="2026-02-18 13:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:02:29.384843835 +0000 UTC m=+181.880564767" watchObservedRunningTime="2026-02-18 14:02:29.445003911 +0000 UTC m=+181.940724833" Feb 18 14:02:30 crc kubenswrapper[4739]: I0218 14:02:30.376524 4739 generic.go:334] "Generic (PLEG): container finished" podID="a7549289-fee3-4211-b340-731ff70593d1" containerID="d8f6d516155d589e7d1eb7a6eea99d4c413ff9b7a11cd8c67dd3e58c0a1f215c" exitCode=0 Feb 18 14:02:30 crc kubenswrapper[4739]: I0218 14:02:30.376592 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm56z" event={"ID":"a7549289-fee3-4211-b340-731ff70593d1","Type":"ContainerDied","Data":"d8f6d516155d589e7d1eb7a6eea99d4c413ff9b7a11cd8c67dd3e58c0a1f215c"} Feb 18 14:02:30 crc kubenswrapper[4739]: I0218 14:02:30.387175 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ch5b" event={"ID":"692fafe2-8be1-4359-8a74-f8916c8f6d55","Type":"ContainerStarted","Data":"44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5"} Feb 18 14:02:30 crc kubenswrapper[4739]: I0218 14:02:30.392574 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wznkg" event={"ID":"6955631f-9981-47a5-8ecb-8756df4e0256","Type":"ContainerStarted","Data":"1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8"} Feb 18 14:02:30 crc kubenswrapper[4739]: I0218 14:02:30.421771 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wznkg" podStartSLOduration=2.317480171 podStartE2EDuration="33.42175384s" podCreationTimestamp="2026-02-18 14:01:57 +0000 UTC" firstStartedPulling="2026-02-18 14:01:59.056027366 +0000 UTC m=+151.551748288" lastFinishedPulling="2026-02-18 14:02:30.160301035 +0000 UTC m=+182.656021957" observedRunningTime="2026-02-18 14:02:30.419999135 +0000 UTC m=+182.915720067" watchObservedRunningTime="2026-02-18 14:02:30.42175384 +0000 UTC m=+182.917474762" Feb 18 14:02:30 crc kubenswrapper[4739]: I0218 14:02:30.432524 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" Feb 18 14:02:30 crc kubenswrapper[4739]: I0218 14:02:30.445601 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2ch5b" podStartSLOduration=3.333328062 podStartE2EDuration="35.445584403s" podCreationTimestamp="2026-02-18 14:01:55 +0000 UTC" firstStartedPulling="2026-02-18 14:01:58.015392235 +0000 UTC m=+150.511113157" lastFinishedPulling="2026-02-18 14:02:30.127648576 +0000 UTC m=+182.623369498" observedRunningTime="2026-02-18 
14:02:30.441551749 +0000 UTC m=+182.937272671" watchObservedRunningTime="2026-02-18 14:02:30.445584403 +0000 UTC m=+182.941305325" Feb 18 14:02:31 crc kubenswrapper[4739]: I0218 14:02:31.400114 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fst2x" event={"ID":"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6","Type":"ContainerStarted","Data":"d8b0e45f2489b814f6c651908ac9de9ccfdd37970f3be25b936a09332a3b1f38"} Feb 18 14:02:31 crc kubenswrapper[4739]: I0218 14:02:31.402612 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5j8b" event={"ID":"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2","Type":"ContainerStarted","Data":"d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6"} Feb 18 14:02:31 crc kubenswrapper[4739]: I0218 14:02:31.406536 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm56z" event={"ID":"a7549289-fee3-4211-b340-731ff70593d1","Type":"ContainerStarted","Data":"91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8"} Feb 18 14:02:31 crc kubenswrapper[4739]: I0218 14:02:31.421752 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fst2x" podStartSLOduration=2.220334294 podStartE2EDuration="33.421733378s" podCreationTimestamp="2026-02-18 14:01:58 +0000 UTC" firstStartedPulling="2026-02-18 14:01:59.063663942 +0000 UTC m=+151.559384864" lastFinishedPulling="2026-02-18 14:02:30.265063026 +0000 UTC m=+182.760783948" observedRunningTime="2026-02-18 14:02:31.420707332 +0000 UTC m=+183.916428254" watchObservedRunningTime="2026-02-18 14:02:31.421733378 +0000 UTC m=+183.917454300" Feb 18 14:02:31 crc kubenswrapper[4739]: I0218 14:02:31.441624 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t5j8b" podStartSLOduration=4.036945127 podStartE2EDuration="36.441608718s" podCreationTimestamp="2026-02-18 14:01:55 +0000 UTC" firstStartedPulling="2026-02-18 14:01:57.966830548 +0000 UTC m=+150.462551470" lastFinishedPulling="2026-02-18 14:02:30.371494139 +0000 UTC m=+182.867215061" observedRunningTime="2026-02-18 14:02:31.440292735 +0000 UTC m=+183.936013667" watchObservedRunningTime="2026-02-18 14:02:31.441608718 +0000 UTC m=+183.937329640" Feb 18 14:02:35 crc kubenswrapper[4739]: I0218 14:02:35.600872 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:02:35 crc kubenswrapper[4739]: I0218 14:02:35.601292 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:02:36 crc kubenswrapper[4739]: I0218 14:02:36.064948 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:02:36 crc kubenswrapper[4739]: I0218 14:02:36.080591 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fm56z" podStartSLOduration=7.372635346 podStartE2EDuration="38.080574923s" podCreationTimestamp="2026-02-18 14:01:58 +0000 UTC" firstStartedPulling="2026-02-18 14:02:00.080410281 +0000 UTC m=+152.576131203" lastFinishedPulling="2026-02-18 14:02:30.788349858 +0000 UTC m=+183.284070780" observedRunningTime="2026-02-18 14:02:31.461988742 +0000 UTC m=+183.957709664" watchObservedRunningTime="2026-02-18 14:02:36.080574923 +0000 UTC 
m=+188.576295845" Feb 18 14:02:36 crc kubenswrapper[4739]: I0218 14:02:36.231658 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:02:36 crc kubenswrapper[4739]: I0218 14:02:36.231714 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:02:36 crc kubenswrapper[4739]: I0218 14:02:36.273539 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:02:36 crc kubenswrapper[4739]: I0218 14:02:36.466677 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 14:02:36 crc kubenswrapper[4739]: I0218 14:02:36.472518 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:02:36 crc kubenswrapper[4739]: I0218 14:02:36.472571 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:02:37 crc kubenswrapper[4739]: I0218 14:02:37.990318 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:02:37 crc kubenswrapper[4739]: I0218 14:02:37.990374 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:02:38 crc kubenswrapper[4739]: I0218 14:02:38.035247 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:02:38 crc kubenswrapper[4739]: I0218 14:02:38.405681 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:02:38 crc kubenswrapper[4739]: I0218 14:02:38.406016 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:02:38 crc kubenswrapper[4739]: I0218 14:02:38.427574 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-64j2j"] Feb 18 14:02:38 crc kubenswrapper[4739]: I0218 14:02:38.513358 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:02:38 crc kubenswrapper[4739]: I0218 14:02:38.516000 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:02:38 crc kubenswrapper[4739]: I0218 14:02:38.587208 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:02:38 crc kubenswrapper[4739]: I0218 14:02:38.671904 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t5j8b"] Feb 18 14:02:38 crc kubenswrapper[4739]: I0218 14:02:38.672544 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t5j8b" podUID="28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" containerName="registry-server" containerID="cri-o://d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6" gracePeriod=2 Feb 18 14:02:38 crc kubenswrapper[4739]: I0218 14:02:38.858109 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:02:38 crc kubenswrapper[4739]: I0218 14:02:38.858157 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:02:38 crc kubenswrapper[4739]: I0218 14:02:38.917561 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.046369 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.137083 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-catalog-content\") pod \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\" (UID: \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\") " Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.137268 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wccvz\" (UniqueName: \"kubernetes.io/projected/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-kube-api-access-wccvz\") pod \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\" (UID: \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\") " Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.137300 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-utilities\") pod \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\" (UID: \"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2\") " Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.138175 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-utilities" (OuterVolumeSpecName: "utilities") pod "28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" (UID: "28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.142849 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-kube-api-access-wccvz" (OuterVolumeSpecName: "kube-api-access-wccvz") pod "28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" (UID: "28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2"). InnerVolumeSpecName "kube-api-access-wccvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.205433 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" (UID: "28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.238255 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wccvz\" (UniqueName: \"kubernetes.io/projected/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-kube-api-access-wccvz\") on node \"crc\" DevicePath \"\"" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.238292 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.238302 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.460213 4739 generic.go:334] "Generic (PLEG): container finished" podID="28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" containerID="d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6" exitCode=0 Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.460281 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t5j8b" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.460330 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5j8b" event={"ID":"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2","Type":"ContainerDied","Data":"d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6"} Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.460383 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t5j8b" event={"ID":"28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2","Type":"ContainerDied","Data":"81b5fcbef1870c44069bb7dc9291550938515d7a028de25c6b79896e1bc2cecd"} Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.460418 4739 scope.go:117] "RemoveContainer" containerID="d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.475302 4739 scope.go:117] "RemoveContainer" containerID="0860055875df375252bcfb11d2392a31b59063d708c629e10cdb5217a0d78de6" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.491672 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t5j8b"] Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.493388 4739 scope.go:117] "RemoveContainer" containerID="0a76e46bf994105b5a8e8a327f815ac68db10d09a5df73e4062a197c3fcf75a5" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.494305 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t5j8b"] Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.518650 4739 scope.go:117] "RemoveContainer" containerID="d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.519050 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:02:39 crc kubenswrapper[4739]: E0218 14:02:39.519166 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6\": container with ID starting with 
d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6 not found: ID does not exist" containerID="d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.519209 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6"} err="failed to get container status \"d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6\": rpc error: code = NotFound desc = could not find container \"d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6\": container with ID starting with d8581820ab79c7d96a6163eedc41c1deab619cb273e354e6a4da23506b6acab6 not found: ID does not exist" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.519288 4739 scope.go:117] "RemoveContainer" containerID="0860055875df375252bcfb11d2392a31b59063d708c629e10cdb5217a0d78de6" Feb 18 14:02:39 crc kubenswrapper[4739]: E0218 14:02:39.519754 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0860055875df375252bcfb11d2392a31b59063d708c629e10cdb5217a0d78de6\": container with ID starting with 0860055875df375252bcfb11d2392a31b59063d708c629e10cdb5217a0d78de6 not found: ID does not exist" containerID="0860055875df375252bcfb11d2392a31b59063d708c629e10cdb5217a0d78de6" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.519804 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0860055875df375252bcfb11d2392a31b59063d708c629e10cdb5217a0d78de6"} err="failed to get container status \"0860055875df375252bcfb11d2392a31b59063d708c629e10cdb5217a0d78de6\": rpc error: code = NotFound desc = could not find container \"0860055875df375252bcfb11d2392a31b59063d708c629e10cdb5217a0d78de6\": container with ID starting with 0860055875df375252bcfb11d2392a31b59063d708c629e10cdb5217a0d78de6 not found: ID does not exist" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.519831 4739 scope.go:117] "RemoveContainer" containerID="0a76e46bf994105b5a8e8a327f815ac68db10d09a5df73e4062a197c3fcf75a5" Feb 18 14:02:39 crc kubenswrapper[4739]: E0218 14:02:39.520272 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a76e46bf994105b5a8e8a327f815ac68db10d09a5df73e4062a197c3fcf75a5\": container with ID starting with 0a76e46bf994105b5a8e8a327f815ac68db10d09a5df73e4062a197c3fcf75a5 not found: ID does not exist" containerID="0a76e46bf994105b5a8e8a327f815ac68db10d09a5df73e4062a197c3fcf75a5" Feb 18 14:02:39 crc kubenswrapper[4739]: I0218 14:02:39.520315 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a76e46bf994105b5a8e8a327f815ac68db10d09a5df73e4062a197c3fcf75a5"} err="failed to get container status \"0a76e46bf994105b5a8e8a327f815ac68db10d09a5df73e4062a197c3fcf75a5\": rpc error: code = NotFound desc = could not find container \"0a76e46bf994105b5a8e8a327f815ac68db10d09a5df73e4062a197c3fcf75a5\": container with ID starting with 0a76e46bf994105b5a8e8a327f815ac68db10d09a5df73e4062a197c3fcf75a5 not found: ID does not exist" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.416606 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" path="/var/lib/kubelet/pods/28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2/volumes" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.782652 
4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 14:02:40 crc kubenswrapper[4739]: E0218 14:02:40.782913 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" containerName="extract-utilities" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.782930 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" containerName="extract-utilities" Feb 18 14:02:40 crc kubenswrapper[4739]: E0218 14:02:40.782944 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a" containerName="pruner" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.782952 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a" containerName="pruner" Feb 18 14:02:40 crc kubenswrapper[4739]: E0218 14:02:40.782964 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" containerName="registry-server" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.782972 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" containerName="registry-server" Feb 18 14:02:40 crc kubenswrapper[4739]: E0218 14:02:40.782982 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962751cd-ff1a-4e95-8027-aebe728486cd" containerName="pruner" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.782990 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="962751cd-ff1a-4e95-8027-aebe728486cd" containerName="pruner" Feb 18 14:02:40 crc kubenswrapper[4739]: E0218 14:02:40.783005 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" containerName="extract-content" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.783013 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" containerName="extract-content" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.783138 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3069f8d4-4c22-4d3e-8d00-b08abfc1ca7a" containerName="pruner" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.783149 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="962751cd-ff1a-4e95-8027-aebe728486cd" containerName="pruner" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.783157 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="28c2c6dd-c0bb-4e02-8ec9-53b9616e1bf2" containerName="registry-server" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.783641 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.787701 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.788100 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.796932 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.862400 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbdd0a7f-2264-4d64-a5a7-1665422dc55e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bbdd0a7f-2264-4d64-a5a7-1665422dc55e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.862459 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbdd0a7f-2264-4d64-a5a7-1665422dc55e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bbdd0a7f-2264-4d64-a5a7-1665422dc55e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.963178 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbdd0a7f-2264-4d64-a5a7-1665422dc55e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bbdd0a7f-2264-4d64-a5a7-1665422dc55e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.963252 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbdd0a7f-2264-4d64-a5a7-1665422dc55e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bbdd0a7f-2264-4d64-a5a7-1665422dc55e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.963510 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbdd0a7f-2264-4d64-a5a7-1665422dc55e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bbdd0a7f-2264-4d64-a5a7-1665422dc55e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:02:40 crc kubenswrapper[4739]: I0218 14:02:40.982028 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbdd0a7f-2264-4d64-a5a7-1665422dc55e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bbdd0a7f-2264-4d64-a5a7-1665422dc55e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:02:41 crc kubenswrapper[4739]: I0218 14:02:41.072157 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fst2x"] Feb 18 14:02:41 crc kubenswrapper[4739]: I0218 14:02:41.072732 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fst2x" podUID="1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" containerName="registry-server" containerID="cri-o://d8b0e45f2489b814f6c651908ac9de9ccfdd37970f3be25b936a09332a3b1f38" gracePeriod=2 Feb 18 14:02:41 crc kubenswrapper[4739]: 
I0218 14:02:41.110780 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:02:41 crc kubenswrapper[4739]: I0218 14:02:41.335012 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 14:02:41 crc kubenswrapper[4739]: I0218 14:02:41.476974 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"bbdd0a7f-2264-4d64-a5a7-1665422dc55e","Type":"ContainerStarted","Data":"4594ae73637724cfadec7d9508ed2522518c7095617adf88529667a39028681d"} Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.484815 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"bbdd0a7f-2264-4d64-a5a7-1665422dc55e","Type":"ContainerStarted","Data":"e811f61f8fe9da1df5f299f9a0ac13882cde48874dc9a82a271fcbd8e42250e0"} Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.488108 4739 generic.go:334] "Generic (PLEG): container finished" podID="1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" containerID="d8b0e45f2489b814f6c651908ac9de9ccfdd37970f3be25b936a09332a3b1f38" exitCode=0 Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.488154 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fst2x" event={"ID":"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6","Type":"ContainerDied","Data":"d8b0e45f2489b814f6c651908ac9de9ccfdd37970f3be25b936a09332a3b1f38"} Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.501392 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.501374198 podStartE2EDuration="2.501374198s" podCreationTimestamp="2026-02-18 14:02:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:02:42.498376481 +0000 UTC m=+194.994097403" watchObservedRunningTime="2026-02-18 14:02:42.501374198 +0000 UTC m=+194.997095120" Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.818327 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fst2x" Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.887156 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-utilities\") pod \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\" (UID: \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\") " Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.887206 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-catalog-content\") pod \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\" (UID: \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\") " Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.887275 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v8dg\" (UniqueName: \"kubernetes.io/projected/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-kube-api-access-7v8dg\") pod \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\" (UID: \"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6\") " Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.888328 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-utilities" (OuterVolumeSpecName: "utilities") pod "1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" (UID: "1ef7eb68-c7a7-448e-bbbc-10798fabc4e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.896625 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-kube-api-access-7v8dg" (OuterVolumeSpecName: "kube-api-access-7v8dg") pod "1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" (UID: "1ef7eb68-c7a7-448e-bbbc-10798fabc4e6"). InnerVolumeSpecName "kube-api-access-7v8dg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.920618 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" (UID: "1ef7eb68-c7a7-448e-bbbc-10798fabc4e6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.988295 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.988327 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:02:42 crc kubenswrapper[4739]: I0218 14:02:42.988337 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v8dg\" (UniqueName: \"kubernetes.io/projected/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6-kube-api-access-7v8dg\") on node \"crc\" DevicePath \"\"" Feb 18 14:02:43 crc kubenswrapper[4739]: I0218 14:02:43.496222 4739 generic.go:334] "Generic (PLEG): container finished" podID="bbdd0a7f-2264-4d64-a5a7-1665422dc55e" containerID="e811f61f8fe9da1df5f299f9a0ac13882cde48874dc9a82a271fcbd8e42250e0" exitCode=0 Feb 18 14:02:43 crc kubenswrapper[4739]: I0218 14:02:43.496367 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"bbdd0a7f-2264-4d64-a5a7-1665422dc55e","Type":"ContainerDied","Data":"e811f61f8fe9da1df5f299f9a0ac13882cde48874dc9a82a271fcbd8e42250e0"} Feb 18 14:02:43 crc kubenswrapper[4739]: I0218 14:02:43.499409 4739 generic.go:334] "Generic (PLEG): container finished" podID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" containerID="4e07a94ec0847b4e99755ab2a06cb038c67fb9badd5a1660eeebdbdd132f59cc" exitCode=0 Feb 18 14:02:43 crc kubenswrapper[4739]: I0218 14:02:43.499497 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8kkn" event={"ID":"7ce55882-0feb-4edb-99df-9df2dcb6e62e","Type":"ContainerDied","Data":"4e07a94ec0847b4e99755ab2a06cb038c67fb9badd5a1660eeebdbdd132f59cc"} Feb 18 14:02:43 crc kubenswrapper[4739]: I0218 14:02:43.503418 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccnsw" event={"ID":"7772552e-1443-4f54-a50c-a73f55863363","Type":"ContainerStarted","Data":"ba21121e32133480d6f4a8b7c111f2d6964f80d4bc0d0cbf8f72a44cb17d7fdb"} Feb 18 14:02:43 crc kubenswrapper[4739]: I0218 14:02:43.507832 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fst2x" event={"ID":"1ef7eb68-c7a7-448e-bbbc-10798fabc4e6","Type":"ContainerDied","Data":"96ae9a700ac6737e5625e17caed3c6cbabf21ead3f7cc350e69ee97905a208a7"} Feb 18 14:02:43 crc kubenswrapper[4739]: I0218 14:02:43.507874 4739 scope.go:117] "RemoveContainer" containerID="d8b0e45f2489b814f6c651908ac9de9ccfdd37970f3be25b936a09332a3b1f38" Feb 18 14:02:43 crc kubenswrapper[4739]: I0218 14:02:43.507990 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fst2x"
Feb 18 14:02:43 crc kubenswrapper[4739]: I0218 14:02:43.537660 4739 scope.go:117] "RemoveContainer" containerID="c7007a9b012b9e998d5fc274e2d579ca39008b701f39ad42a1d228cbf01383d0"
Feb 18 14:02:43 crc kubenswrapper[4739]: I0218 14:02:43.587796 4739 scope.go:117] "RemoveContainer" containerID="d9f38a5539526a77e4dfda52eaa55e735ab6abeb3007d8993d086f49fd96f3f0"
Feb 18 14:02:43 crc kubenswrapper[4739]: I0218 14:02:43.590778 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fst2x"]
Feb 18 14:02:43 crc kubenswrapper[4739]: I0218 14:02:43.596553 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fst2x"]
Feb 18 14:02:44 crc kubenswrapper[4739]: I0218 14:02:44.425717 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" path="/var/lib/kubelet/pods/1ef7eb68-c7a7-448e-bbbc-10798fabc4e6/volumes"
Feb 18 14:02:44 crc kubenswrapper[4739]: I0218 14:02:44.519603 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8kkn" event={"ID":"7ce55882-0feb-4edb-99df-9df2dcb6e62e","Type":"ContainerStarted","Data":"b2a60f4fb9b49f347db21a50c2097f9a1a95de43e825543cb9badb0925f33d62"}
Feb 18 14:02:44 crc kubenswrapper[4739]: I0218 14:02:44.522268 4739 generic.go:334] "Generic (PLEG): container finished" podID="7772552e-1443-4f54-a50c-a73f55863363" containerID="ba21121e32133480d6f4a8b7c111f2d6964f80d4bc0d0cbf8f72a44cb17d7fdb" exitCode=0
Feb 18 14:02:44 crc kubenswrapper[4739]: I0218 14:02:44.522317 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccnsw" event={"ID":"7772552e-1443-4f54-a50c-a73f55863363","Type":"ContainerDied","Data":"ba21121e32133480d6f4a8b7c111f2d6964f80d4bc0d0cbf8f72a44cb17d7fdb"}
Feb 18 14:02:44 crc kubenswrapper[4739]: I0218 14:02:44.545320 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n8kkn" podStartSLOduration=3.58545342 podStartE2EDuration="49.545255742s" podCreationTimestamp="2026-02-18 14:01:55 +0000 UTC" firstStartedPulling="2026-02-18 14:01:57.982591263 +0000 UTC m=+150.478312185" lastFinishedPulling="2026-02-18 14:02:43.942393585 +0000 UTC m=+196.438114507" observedRunningTime="2026-02-18 14:02:44.539085413 +0000 UTC m=+197.034806345" watchObservedRunningTime="2026-02-18 14:02:44.545255742 +0000 UTC m=+197.040976664"
Feb 18 14:02:44 crc kubenswrapper[4739]: I0218 14:02:44.766774 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
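The pod_startup_latency_tracker entry above packs several timestamps into one line, and the numbers are self-consistent: for certified-operators-n8kkn, watchObservedRunningTime minus podCreationTimestamp is 49.545255742s, which is exactly the reported podStartE2EDuration, and subtracting the image-pull window (lastFinishedPulling minus firstStartedPulling, about 45.96s) gives the reported podStartSLOduration of about 3.585s. The small stdlib-only Go sketch below reproduces that arithmetic from the values in the line; the relationship between the fields is inferred from these numbers rather than quoted from kubelet source.

```go
// Reproduce the podStartE2EDuration / podStartSLOduration arithmetic for the
// certified-operators-n8kkn entry above. Timestamps are copied from the log;
// the formula is inferred from the numbers, not quoted from kubelet source.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Matches timestamps like "2026-02-18 14:01:57.982591263 +0000 UTC";
	// Go's parser accepts the optional fractional seconds.
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-02-18 14:01:55 +0000 UTC")
	firstPull := mustParse("2026-02-18 14:01:57.982591263 +0000 UTC")
	lastPull := mustParse("2026-02-18 14:02:43.942393585 +0000 UTC")
	watchRunning := mustParse("2026-02-18 14:02:44.545255742 +0000 UTC")

	e2e := watchRunning.Sub(created)     // creation to observed running
	slo := e2e - lastPull.Sub(firstPull) // same, minus the image-pull window

	fmt.Println("podStartE2EDuration:", e2e) // 49.545255742s
	fmt.Println("podStartSLOduration:", slo) // 3.58545342s
}
```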
Feb 18 14:02:44 crc kubenswrapper[4739]: I0218 14:02:44.814330 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbdd0a7f-2264-4d64-a5a7-1665422dc55e-kube-api-access\") pod \"bbdd0a7f-2264-4d64-a5a7-1665422dc55e\" (UID: \"bbdd0a7f-2264-4d64-a5a7-1665422dc55e\") "
Feb 18 14:02:44 crc kubenswrapper[4739]: I0218 14:02:44.814385 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbdd0a7f-2264-4d64-a5a7-1665422dc55e-kubelet-dir\") pod \"bbdd0a7f-2264-4d64-a5a7-1665422dc55e\" (UID: \"bbdd0a7f-2264-4d64-a5a7-1665422dc55e\") "
Feb 18 14:02:44 crc kubenswrapper[4739]: I0218 14:02:44.814536 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbdd0a7f-2264-4d64-a5a7-1665422dc55e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bbdd0a7f-2264-4d64-a5a7-1665422dc55e" (UID: "bbdd0a7f-2264-4d64-a5a7-1665422dc55e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 14:02:44 crc kubenswrapper[4739]: I0218 14:02:44.814823 4739 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bbdd0a7f-2264-4d64-a5a7-1665422dc55e-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 18 14:02:44 crc kubenswrapper[4739]: I0218 14:02:44.829597 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbdd0a7f-2264-4d64-a5a7-1665422dc55e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bbdd0a7f-2264-4d64-a5a7-1665422dc55e" (UID: "bbdd0a7f-2264-4d64-a5a7-1665422dc55e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:02:44 crc kubenswrapper[4739]: I0218 14:02:44.916289 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bbdd0a7f-2264-4d64-a5a7-1665422dc55e-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 18 14:02:45 crc kubenswrapper[4739]: I0218 14:02:45.533992 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"bbdd0a7f-2264-4d64-a5a7-1665422dc55e","Type":"ContainerDied","Data":"4594ae73637724cfadec7d9508ed2522518c7095617adf88529667a39028681d"}
Feb 18 14:02:45 crc kubenswrapper[4739]: I0218 14:02:45.534296 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4594ae73637724cfadec7d9508ed2522518c7095617adf88529667a39028681d"
Feb 18 14:02:45 crc kubenswrapper[4739]: I0218 14:02:45.534013 4739 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 14:02:45 crc kubenswrapper[4739]: I0218 14:02:45.536084 4739 generic.go:334] "Generic (PLEG): container finished" podID="a44b0172-9ef1-4181-8380-bfe703bdc50d" containerID="e6219fd31904426472b017834034f247e7d9c77251713ad952a69e7b70cd8d10" exitCode=0 Feb 18 14:02:45 crc kubenswrapper[4739]: I0218 14:02:45.536150 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47vjm" event={"ID":"a44b0172-9ef1-4181-8380-bfe703bdc50d","Type":"ContainerDied","Data":"e6219fd31904426472b017834034f247e7d9c77251713ad952a69e7b70cd8d10"} Feb 18 14:02:45 crc kubenswrapper[4739]: I0218 14:02:45.538989 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccnsw" event={"ID":"7772552e-1443-4f54-a50c-a73f55863363","Type":"ContainerStarted","Data":"8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609"} Feb 18 14:02:45 crc kubenswrapper[4739]: I0218 14:02:45.582560 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ccnsw" podStartSLOduration=2.769830766 podStartE2EDuration="47.582534665s" podCreationTimestamp="2026-02-18 14:01:58 +0000 UTC" firstStartedPulling="2026-02-18 14:02:00.086065816 +0000 UTC m=+152.581786728" lastFinishedPulling="2026-02-18 14:02:44.898769705 +0000 UTC m=+197.394490627" observedRunningTime="2026-02-18 14:02:45.580830429 +0000 UTC m=+198.076551371" watchObservedRunningTime="2026-02-18 14:02:45.582534665 +0000 UTC m=+198.078255607" Feb 18 14:02:46 crc kubenswrapper[4739]: I0218 14:02:46.023336 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:02:46 crc kubenswrapper[4739]: I0218 14:02:46.023382 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:02:46 crc kubenswrapper[4739]: I0218 14:02:46.077852 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:02:46 crc kubenswrapper[4739]: I0218 14:02:46.547577 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47vjm" event={"ID":"a44b0172-9ef1-4181-8380-bfe703bdc50d","Type":"ContainerStarted","Data":"2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa"} Feb 18 14:02:46 crc kubenswrapper[4739]: I0218 14:02:46.577763 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-47vjm" podStartSLOduration=3.632168132 podStartE2EDuration="51.577746492s" podCreationTimestamp="2026-02-18 14:01:55 +0000 UTC" firstStartedPulling="2026-02-18 14:01:57.97548553 +0000 UTC m=+150.471206452" lastFinishedPulling="2026-02-18 14:02:45.92106389 +0000 UTC m=+198.416784812" observedRunningTime="2026-02-18 14:02:46.575206385 +0000 UTC m=+199.070927347" watchObservedRunningTime="2026-02-18 14:02:46.577746492 +0000 UTC m=+199.073467414" Feb 18 14:02:47 crc kubenswrapper[4739]: I0218 14:02:47.972150 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 14:02:47 crc kubenswrapper[4739]: E0218 14:02:47.972815 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" containerName="registry-server" Feb 18 14:02:47 crc kubenswrapper[4739]: I0218 
14:02:47.972839 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" containerName="registry-server" Feb 18 14:02:47 crc kubenswrapper[4739]: E0218 14:02:47.972859 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbdd0a7f-2264-4d64-a5a7-1665422dc55e" containerName="pruner" Feb 18 14:02:47 crc kubenswrapper[4739]: I0218 14:02:47.972870 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbdd0a7f-2264-4d64-a5a7-1665422dc55e" containerName="pruner" Feb 18 14:02:47 crc kubenswrapper[4739]: E0218 14:02:47.972888 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" containerName="extract-content" Feb 18 14:02:47 crc kubenswrapper[4739]: I0218 14:02:47.972899 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" containerName="extract-content" Feb 18 14:02:47 crc kubenswrapper[4739]: E0218 14:02:47.972923 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" containerName="extract-utilities" Feb 18 14:02:47 crc kubenswrapper[4739]: I0218 14:02:47.972934 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" containerName="extract-utilities" Feb 18 14:02:47 crc kubenswrapper[4739]: I0218 14:02:47.973083 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbdd0a7f-2264-4d64-a5a7-1665422dc55e" containerName="pruner" Feb 18 14:02:47 crc kubenswrapper[4739]: I0218 14:02:47.973113 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ef7eb68-c7a7-448e-bbbc-10798fabc4e6" containerName="registry-server" Feb 18 14:02:47 crc kubenswrapper[4739]: I0218 14:02:47.973667 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:02:47 crc kubenswrapper[4739]: I0218 14:02:47.979782 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 14:02:47 crc kubenswrapper[4739]: I0218 14:02:47.980044 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 14:02:48 crc kubenswrapper[4739]: I0218 14:02:47.990087 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 14:02:48 crc kubenswrapper[4739]: I0218 14:02:48.055318 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e440b2ba-20b4-4568-99bc-ffad1f19eafb-kube-api-access\") pod \"installer-9-crc\" (UID: \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:02:48 crc kubenswrapper[4739]: I0218 14:02:48.055538 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e440b2ba-20b4-4568-99bc-ffad1f19eafb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:02:48 crc kubenswrapper[4739]: I0218 14:02:48.055612 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e440b2ba-20b4-4568-99bc-ffad1f19eafb-var-lock\") pod \"installer-9-crc\" (UID: \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:02:48 crc kubenswrapper[4739]: I0218 14:02:48.157293 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e440b2ba-20b4-4568-99bc-ffad1f19eafb-kube-api-access\") pod \"installer-9-crc\" (UID: \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:02:48 crc kubenswrapper[4739]: I0218 14:02:48.157388 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e440b2ba-20b4-4568-99bc-ffad1f19eafb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:02:48 crc kubenswrapper[4739]: I0218 14:02:48.157419 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e440b2ba-20b4-4568-99bc-ffad1f19eafb-var-lock\") pod \"installer-9-crc\" (UID: \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:02:48 crc kubenswrapper[4739]: I0218 14:02:48.157501 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e440b2ba-20b4-4568-99bc-ffad1f19eafb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:02:48 crc kubenswrapper[4739]: I0218 14:02:48.157538 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e440b2ba-20b4-4568-99bc-ffad1f19eafb-var-lock\") pod \"installer-9-crc\" (UID: 
\"e440b2ba-20b4-4568-99bc-ffad1f19eafb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:02:48 crc kubenswrapper[4739]: I0218 14:02:48.184021 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e440b2ba-20b4-4568-99bc-ffad1f19eafb-kube-api-access\") pod \"installer-9-crc\" (UID: \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:02:48 crc kubenswrapper[4739]: I0218 14:02:48.323505 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:02:48 crc kubenswrapper[4739]: I0218 14:02:48.640049 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 14:02:49 crc kubenswrapper[4739]: I0218 14:02:49.036022 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:02:49 crc kubenswrapper[4739]: I0218 14:02:49.036340 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:02:49 crc kubenswrapper[4739]: I0218 14:02:49.569685 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e440b2ba-20b4-4568-99bc-ffad1f19eafb","Type":"ContainerStarted","Data":"37732bee3d0ca90d1c6df703d80575c9d4075b9f00e0d96971f76ccebc6611c8"} Feb 18 14:02:49 crc kubenswrapper[4739]: I0218 14:02:49.569727 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e440b2ba-20b4-4568-99bc-ffad1f19eafb","Type":"ContainerStarted","Data":"a14cae65a1f3403447f1c63df6c06c98c502f096844a01ad5304537c30625604"} Feb 18 14:02:49 crc kubenswrapper[4739]: I0218 14:02:49.586671 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.58665167 podStartE2EDuration="2.58665167s" podCreationTimestamp="2026-02-18 14:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:02:49.58251063 +0000 UTC m=+202.078231552" watchObservedRunningTime="2026-02-18 14:02:49.58665167 +0000 UTC m=+202.082372612" Feb 18 14:02:50 crc kubenswrapper[4739]: I0218 14:02:50.094384 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ccnsw" podUID="7772552e-1443-4f54-a50c-a73f55863363" containerName="registry-server" probeResult="failure" output=< Feb 18 14:02:50 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:02:50 crc kubenswrapper[4739]: > Feb 18 14:02:55 crc kubenswrapper[4739]: I0218 14:02:55.807335 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:02:55 crc kubenswrapper[4739]: I0218 14:02:55.808017 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:02:55 crc kubenswrapper[4739]: I0218 14:02:55.853883 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:02:56 crc kubenswrapper[4739]: I0218 14:02:56.068022 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:02:56 crc kubenswrapper[4739]: I0218 14:02:56.721631 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.275904 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n8kkn"] Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.276341 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n8kkn" podUID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" containerName="registry-server" containerID="cri-o://b2a60f4fb9b49f347db21a50c2097f9a1a95de43e825543cb9badb0925f33d62" gracePeriod=2 Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.667389 4739 generic.go:334] "Generic (PLEG): container finished" podID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" containerID="b2a60f4fb9b49f347db21a50c2097f9a1a95de43e825543cb9badb0925f33d62" exitCode=0 Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.667508 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8kkn" event={"ID":"7ce55882-0feb-4edb-99df-9df2dcb6e62e","Type":"ContainerDied","Data":"b2a60f4fb9b49f347db21a50c2097f9a1a95de43e825543cb9badb0925f33d62"} Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.667578 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8kkn" event={"ID":"7ce55882-0feb-4edb-99df-9df2dcb6e62e","Type":"ContainerDied","Data":"1bb8b1ac920da0708b75374c6eb6ccb11af1b832abba028a06c828609d37f144"} Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.667597 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bb8b1ac920da0708b75374c6eb6ccb11af1b832abba028a06c828609d37f144" Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.684843 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.824391 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r242g\" (UniqueName: \"kubernetes.io/projected/7ce55882-0feb-4edb-99df-9df2dcb6e62e-kube-api-access-r242g\") pod \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\" (UID: \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\") " Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.824867 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ce55882-0feb-4edb-99df-9df2dcb6e62e-catalog-content\") pod \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\" (UID: \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\") " Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.825082 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ce55882-0feb-4edb-99df-9df2dcb6e62e-utilities\") pod \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\" (UID: \"7ce55882-0feb-4edb-99df-9df2dcb6e62e\") " Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.825825 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ce55882-0feb-4edb-99df-9df2dcb6e62e-utilities" (OuterVolumeSpecName: "utilities") pod "7ce55882-0feb-4edb-99df-9df2dcb6e62e" (UID: "7ce55882-0feb-4edb-99df-9df2dcb6e62e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.826003 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ce55882-0feb-4edb-99df-9df2dcb6e62e-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.829614 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ce55882-0feb-4edb-99df-9df2dcb6e62e-kube-api-access-r242g" (OuterVolumeSpecName: "kube-api-access-r242g") pod "7ce55882-0feb-4edb-99df-9df2dcb6e62e" (UID: "7ce55882-0feb-4edb-99df-9df2dcb6e62e"). InnerVolumeSpecName "kube-api-access-r242g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.889331 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ce55882-0feb-4edb-99df-9df2dcb6e62e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ce55882-0feb-4edb-99df-9df2dcb6e62e" (UID: "7ce55882-0feb-4edb-99df-9df2dcb6e62e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.927683 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r242g\" (UniqueName: \"kubernetes.io/projected/7ce55882-0feb-4edb-99df-9df2dcb6e62e-kube-api-access-r242g\") on node \"crc\" DevicePath \"\"" Feb 18 14:02:57 crc kubenswrapper[4739]: I0218 14:02:57.927720 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ce55882-0feb-4edb-99df-9df2dcb6e62e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:02:58 crc kubenswrapper[4739]: I0218 14:02:58.674259 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n8kkn" Feb 18 14:02:58 crc kubenswrapper[4739]: I0218 14:02:58.700879 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n8kkn"] Feb 18 14:02:58 crc kubenswrapper[4739]: I0218 14:02:58.704675 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n8kkn"] Feb 18 14:02:59 crc kubenswrapper[4739]: I0218 14:02:59.086436 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:02:59 crc kubenswrapper[4739]: I0218 14:02:59.126377 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:02:59 crc kubenswrapper[4739]: I0218 14:02:59.372847 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:02:59 crc kubenswrapper[4739]: I0218 14:02:59.372922 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:02:59 crc kubenswrapper[4739]: I0218 14:02:59.372981 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 14:02:59 crc kubenswrapper[4739]: I0218 14:02:59.373660 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:02:59 crc kubenswrapper[4739]: I0218 14:02:59.373759 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4" gracePeriod=600 Feb 18 14:02:59 crc kubenswrapper[4739]: I0218 14:02:59.686604 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4" exitCode=0 Feb 18 14:02:59 crc kubenswrapper[4739]: I0218 14:02:59.686843 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4"} Feb 18 14:03:00 crc kubenswrapper[4739]: I0218 14:03:00.421699 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" path="/var/lib/kubelet/pods/7ce55882-0feb-4edb-99df-9df2dcb6e62e/volumes" Feb 18 14:03:00 crc kubenswrapper[4739]: I0218 14:03:00.698074 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"c14eacdda4998b85fc850cbe1ea7ad895d0fff56e3dad4f03ee87c5b35cfb8f6"} Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.280201 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ccnsw"] Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.280613 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ccnsw" podUID="7772552e-1443-4f54-a50c-a73f55863363" containerName="registry-server" containerID="cri-o://8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609" gracePeriod=2 Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.673902 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.720327 4739 generic.go:334] "Generic (PLEG): container finished" podID="7772552e-1443-4f54-a50c-a73f55863363" containerID="8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609" exitCode=0 Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.720381 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccnsw" event={"ID":"7772552e-1443-4f54-a50c-a73f55863363","Type":"ContainerDied","Data":"8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609"} Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.720470 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccnsw" event={"ID":"7772552e-1443-4f54-a50c-a73f55863363","Type":"ContainerDied","Data":"b8cd985c8107733acf822a9680d0b58c3fe410a6ba3b0e24962d1e5b7a41ea56"} Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.720497 4739 scope.go:117] "RemoveContainer" containerID="8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609" Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.720634 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ccnsw" Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.744694 4739 scope.go:117] "RemoveContainer" containerID="ba21121e32133480d6f4a8b7c111f2d6964f80d4bc0d0cbf8f72a44cb17d7fdb" Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.770542 4739 scope.go:117] "RemoveContainer" containerID="c4dacf6a967bd79ba6a5eb88a268ae21fb3c29db76563c7761bb556ccca46a0b" Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.791354 4739 scope.go:117] "RemoveContainer" containerID="8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609" Feb 18 14:03:02 crc kubenswrapper[4739]: E0218 14:03:02.791875 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609\": container with ID starting with 8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609 not found: ID does not exist" containerID="8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609" Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.791919 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609"} err="failed to get container status \"8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609\": rpc error: code = NotFound desc = could not find container \"8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609\": container with ID starting with 8362ba3c319465a2c6d1e2c4e8e95bf051acb670732fc6116cd0f6604aa01609 not found: ID does not exist" Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.791947 4739 scope.go:117] "RemoveContainer" containerID="ba21121e32133480d6f4a8b7c111f2d6964f80d4bc0d0cbf8f72a44cb17d7fdb" Feb 18 14:03:02 crc kubenswrapper[4739]: E0218 14:03:02.792263 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba21121e32133480d6f4a8b7c111f2d6964f80d4bc0d0cbf8f72a44cb17d7fdb\": container with ID starting with ba21121e32133480d6f4a8b7c111f2d6964f80d4bc0d0cbf8f72a44cb17d7fdb not found: ID does not exist" containerID="ba21121e32133480d6f4a8b7c111f2d6964f80d4bc0d0cbf8f72a44cb17d7fdb" Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.792297 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba21121e32133480d6f4a8b7c111f2d6964f80d4bc0d0cbf8f72a44cb17d7fdb"} err="failed to get container status \"ba21121e32133480d6f4a8b7c111f2d6964f80d4bc0d0cbf8f72a44cb17d7fdb\": rpc error: code = NotFound desc = could not find container \"ba21121e32133480d6f4a8b7c111f2d6964f80d4bc0d0cbf8f72a44cb17d7fdb\": container with ID starting with ba21121e32133480d6f4a8b7c111f2d6964f80d4bc0d0cbf8f72a44cb17d7fdb not found: ID does not exist" Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.792318 4739 scope.go:117] "RemoveContainer" containerID="c4dacf6a967bd79ba6a5eb88a268ae21fb3c29db76563c7761bb556ccca46a0b" Feb 18 14:03:02 crc kubenswrapper[4739]: E0218 14:03:02.792653 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4dacf6a967bd79ba6a5eb88a268ae21fb3c29db76563c7761bb556ccca46a0b\": container with ID starting with c4dacf6a967bd79ba6a5eb88a268ae21fb3c29db76563c7761bb556ccca46a0b not found: ID does not exist" containerID="c4dacf6a967bd79ba6a5eb88a268ae21fb3c29db76563c7761bb556ccca46a0b" 
Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.792715 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4dacf6a967bd79ba6a5eb88a268ae21fb3c29db76563c7761bb556ccca46a0b"} err="failed to get container status \"c4dacf6a967bd79ba6a5eb88a268ae21fb3c29db76563c7761bb556ccca46a0b\": rpc error: code = NotFound desc = could not find container \"c4dacf6a967bd79ba6a5eb88a268ae21fb3c29db76563c7761bb556ccca46a0b\": container with ID starting with c4dacf6a967bd79ba6a5eb88a268ae21fb3c29db76563c7761bb556ccca46a0b not found: ID does not exist" Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.795190 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7772552e-1443-4f54-a50c-a73f55863363-utilities\") pod \"7772552e-1443-4f54-a50c-a73f55863363\" (UID: \"7772552e-1443-4f54-a50c-a73f55863363\") " Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.795317 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt9pm\" (UniqueName: \"kubernetes.io/projected/7772552e-1443-4f54-a50c-a73f55863363-kube-api-access-qt9pm\") pod \"7772552e-1443-4f54-a50c-a73f55863363\" (UID: \"7772552e-1443-4f54-a50c-a73f55863363\") " Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.795398 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7772552e-1443-4f54-a50c-a73f55863363-catalog-content\") pod \"7772552e-1443-4f54-a50c-a73f55863363\" (UID: \"7772552e-1443-4f54-a50c-a73f55863363\") " Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.797094 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7772552e-1443-4f54-a50c-a73f55863363-utilities" (OuterVolumeSpecName: "utilities") pod "7772552e-1443-4f54-a50c-a73f55863363" (UID: "7772552e-1443-4f54-a50c-a73f55863363"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.804195 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7772552e-1443-4f54-a50c-a73f55863363-kube-api-access-qt9pm" (OuterVolumeSpecName: "kube-api-access-qt9pm") pod "7772552e-1443-4f54-a50c-a73f55863363" (UID: "7772552e-1443-4f54-a50c-a73f55863363"). InnerVolumeSpecName "kube-api-access-qt9pm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.897061 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7772552e-1443-4f54-a50c-a73f55863363-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.897120 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt9pm\" (UniqueName: \"kubernetes.io/projected/7772552e-1443-4f54-a50c-a73f55863363-kube-api-access-qt9pm\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.957727 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7772552e-1443-4f54-a50c-a73f55863363-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7772552e-1443-4f54-a50c-a73f55863363" (UID: "7772552e-1443-4f54-a50c-a73f55863363"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:03:02 crc kubenswrapper[4739]: I0218 14:03:02.998801 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7772552e-1443-4f54-a50c-a73f55863363-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:03 crc kubenswrapper[4739]: I0218 14:03:03.048222 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ccnsw"] Feb 18 14:03:03 crc kubenswrapper[4739]: I0218 14:03:03.050709 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ccnsw"] Feb 18 14:03:03 crc kubenswrapper[4739]: I0218 14:03:03.475925 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" podUID="663bc659-8603-490f-9b6e-7ffe14960463" containerName="oauth-openshift" containerID="cri-o://2091e0b6ec823c2be46cc955f8e1860f25dcbaf76d40f0a02489ec9b087df706" gracePeriod=15 Feb 18 14:03:03 crc kubenswrapper[4739]: I0218 14:03:03.732527 4739 generic.go:334] "Generic (PLEG): container finished" podID="663bc659-8603-490f-9b6e-7ffe14960463" containerID="2091e0b6ec823c2be46cc955f8e1860f25dcbaf76d40f0a02489ec9b087df706" exitCode=0 Feb 18 14:03:03 crc kubenswrapper[4739]: I0218 14:03:03.732630 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" event={"ID":"663bc659-8603-490f-9b6e-7ffe14960463","Type":"ContainerDied","Data":"2091e0b6ec823c2be46cc955f8e1860f25dcbaf76d40f0a02489ec9b087df706"} Feb 18 14:03:03 crc kubenswrapper[4739]: I0218 14:03:03.886762 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.014299 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-service-ca\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.014377 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-ocp-branding-template\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.014415 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-trusted-ca-bundle\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.014479 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq67j\" (UniqueName: \"kubernetes.io/projected/663bc659-8603-490f-9b6e-7ffe14960463-kube-api-access-zq67j\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.014563 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-serving-cert\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.014595 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-idp-0-file-data\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.014664 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-router-certs\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.014714 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/663bc659-8603-490f-9b6e-7ffe14960463-audit-dir\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.014771 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-cliconfig\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.014834 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-provider-selection\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.014875 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-error\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.014929 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-login\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.014966 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-session\") pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.015024 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-audit-policies\") 
pod \"663bc659-8603-490f-9b6e-7ffe14960463\" (UID: \"663bc659-8603-490f-9b6e-7ffe14960463\") " Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.015391 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/663bc659-8603-490f-9b6e-7ffe14960463-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.015550 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.016555 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.016756 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.016845 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.021061 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/663bc659-8603-490f-9b6e-7ffe14960463-kube-api-access-zq67j" (OuterVolumeSpecName: "kube-api-access-zq67j") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "kube-api-access-zq67j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.026047 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.028631 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.029613 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.030005 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.030327 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.030757 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.031088 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.031287 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "663bc659-8603-490f-9b6e-7ffe14960463" (UID: "663bc659-8603-490f-9b6e-7ffe14960463"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.116482 4739 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.116850 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.116871 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.116890 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.116908 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq67j\" (UniqueName: \"kubernetes.io/projected/663bc659-8603-490f-9b6e-7ffe14960463-kube-api-access-zq67j\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.116923 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.116940 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.116959 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.116975 4739 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/663bc659-8603-490f-9b6e-7ffe14960463-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.116989 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.117002 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.117016 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.117033 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.117050 4739 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/663bc659-8603-490f-9b6e-7ffe14960463-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.426098 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7772552e-1443-4f54-a50c-a73f55863363" path="/var/lib/kubelet/pods/7772552e-1443-4f54-a50c-a73f55863363/volumes" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.744004 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" event={"ID":"663bc659-8603-490f-9b6e-7ffe14960463","Type":"ContainerDied","Data":"39ed9908fc06adc6beaf03f5a0f7a7f9cb74f347fecb397c807b3e8019f3cdd9"} Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.744113 4739 scope.go:117] "RemoveContainer" containerID="2091e0b6ec823c2be46cc955f8e1860f25dcbaf76d40f0a02489ec9b087df706" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.744131 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-64j2j" Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.775048 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-64j2j"] Feb 18 14:03:04 crc kubenswrapper[4739]: I0218 14:03:04.778749 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-64j2j"] Feb 18 14:03:06 crc kubenswrapper[4739]: I0218 14:03:06.417973 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="663bc659-8603-490f-9b6e-7ffe14960463" path="/var/lib/kubelet/pods/663bc659-8603-490f-9b6e-7ffe14960463/volumes" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.613579 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-798cf5fb96-6gsw8"] Feb 18 14:03:08 crc kubenswrapper[4739]: E0218 14:03:08.614130 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="663bc659-8603-490f-9b6e-7ffe14960463" containerName="oauth-openshift" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.614152 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="663bc659-8603-490f-9b6e-7ffe14960463" containerName="oauth-openshift" Feb 18 14:03:08 crc kubenswrapper[4739]: E0218 14:03:08.614173 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" containerName="extract-content" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.614184 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" containerName="extract-content" Feb 18 14:03:08 crc kubenswrapper[4739]: E0218 14:03:08.614196 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7772552e-1443-4f54-a50c-a73f55863363" containerName="registry-server" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.614206 4739 
state_mem.go:107] "Deleted CPUSet assignment" podUID="7772552e-1443-4f54-a50c-a73f55863363" containerName="registry-server" Feb 18 14:03:08 crc kubenswrapper[4739]: E0218 14:03:08.614221 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" containerName="extract-utilities" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.614234 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" containerName="extract-utilities" Feb 18 14:03:08 crc kubenswrapper[4739]: E0218 14:03:08.614256 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7772552e-1443-4f54-a50c-a73f55863363" containerName="extract-content" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.614266 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7772552e-1443-4f54-a50c-a73f55863363" containerName="extract-content" Feb 18 14:03:08 crc kubenswrapper[4739]: E0218 14:03:08.614289 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" containerName="registry-server" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.614300 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" containerName="registry-server" Feb 18 14:03:08 crc kubenswrapper[4739]: E0218 14:03:08.614313 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7772552e-1443-4f54-a50c-a73f55863363" containerName="extract-utilities" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.614323 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7772552e-1443-4f54-a50c-a73f55863363" containerName="extract-utilities" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.614492 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="663bc659-8603-490f-9b6e-7ffe14960463" containerName="oauth-openshift" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.614518 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="7772552e-1443-4f54-a50c-a73f55863363" containerName="registry-server" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.614564 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce55882-0feb-4edb-99df-9df2dcb6e62e" containerName="registry-server" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.615981 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.621179 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.621614 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.621987 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.622991 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.622011 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.622054 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.622268 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.622765 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.623385 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.623556 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.623569 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.624678 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-serving-cert\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.624740 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.625056 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " 
pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.625115 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-user-template-error\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.625190 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk844\" (UniqueName: \"kubernetes.io/projected/bcd76c5a-1d18-4986-9be4-399139f65c11-kube-api-access-nk844\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.625243 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bcd76c5a-1d18-4986-9be4-399139f65c11-audit-policies\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.625283 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-session\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.625372 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bcd76c5a-1d18-4986-9be4-399139f65c11-audit-dir\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.625528 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-router-certs\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.625564 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-service-ca\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.624814 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.625685 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.625789 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-cliconfig\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.625823 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-user-template-login\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.626153 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.639210 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.643076 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.646836 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-798cf5fb96-6gsw8"] Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.667561 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727500 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-user-template-error\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727564 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk844\" (UniqueName: \"kubernetes.io/projected/bcd76c5a-1d18-4986-9be4-399139f65c11-kube-api-access-nk844\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727588 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/bcd76c5a-1d18-4986-9be4-399139f65c11-audit-policies\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727610 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-session\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727635 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bcd76c5a-1d18-4986-9be4-399139f65c11-audit-dir\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727665 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-router-certs\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727714 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-service-ca\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727737 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727780 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-cliconfig\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727802 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-user-template-login\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727825 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727862 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-serving-cert\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727892 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.727912 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.728045 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bcd76c5a-1d18-4986-9be4-399139f65c11-audit-dir\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.729281 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-service-ca\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.729369 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-cliconfig\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.730190 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bcd76c5a-1d18-4986-9be4-399139f65c11-audit-policies\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.730426 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.734354 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-user-template-error\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.735854 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.736361 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-router-certs\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.736571 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-user-template-login\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.738379 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.738580 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-serving-cert\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.739513 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-session\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.742014 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/bcd76c5a-1d18-4986-9be4-399139f65c11-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.757770 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk844\" (UniqueName: \"kubernetes.io/projected/bcd76c5a-1d18-4986-9be4-399139f65c11-kube-api-access-nk844\") pod \"oauth-openshift-798cf5fb96-6gsw8\" (UID: \"bcd76c5a-1d18-4986-9be4-399139f65c11\") " pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:08 crc kubenswrapper[4739]: I0218 14:03:08.959327 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:09 crc kubenswrapper[4739]: I0218 14:03:09.245795 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-798cf5fb96-6gsw8"] Feb 18 14:03:09 crc kubenswrapper[4739]: I0218 14:03:09.776082 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" event={"ID":"bcd76c5a-1d18-4986-9be4-399139f65c11","Type":"ContainerStarted","Data":"873aca0bbc81a7124b75ae87a2863a7a8a119c825b1bc26fde747334cd6eb3e4"} Feb 18 14:03:09 crc kubenswrapper[4739]: I0218 14:03:09.776491 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:09 crc kubenswrapper[4739]: I0218 14:03:09.776510 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" event={"ID":"bcd76c5a-1d18-4986-9be4-399139f65c11","Type":"ContainerStarted","Data":"d93ce4fea1217ed8b6ec72243e4e8b583cb0fce1aa47f890c4b2eb96721eb3f8"} Feb 18 14:03:09 crc kubenswrapper[4739]: I0218 14:03:09.802323 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" podStartSLOduration=31.802296167 podStartE2EDuration="31.802296167s" podCreationTimestamp="2026-02-18 14:02:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:03:09.801004263 +0000 UTC m=+222.296725245" watchObservedRunningTime="2026-02-18 14:03:09.802296167 +0000 UTC m=+222.298017129" Feb 18 14:03:10 crc kubenswrapper[4739]: I0218 14:03:10.155242 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.605347 4739 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.606997 4739 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.607186 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.607405 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc" gracePeriod=15 Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.607546 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59" gracePeriod=15 Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.607596 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e" gracePeriod=15 Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.607561 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990" gracePeriod=15 Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.607630 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db" gracePeriod=15 Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.612083 4739 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 14:03:26 crc kubenswrapper[4739]: E0218 14:03:26.612705 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.612747 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 14:03:26 crc kubenswrapper[4739]: E0218 14:03:26.612785 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.612803 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 14:03:26 crc kubenswrapper[4739]: E0218 14:03:26.612833 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.612849 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 14:03:26 crc kubenswrapper[4739]: E0218 14:03:26.612865 4739 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.612880 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 14:03:26 crc kubenswrapper[4739]: E0218 14:03:26.612901 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.612916 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 14:03:26 crc kubenswrapper[4739]: E0218 14:03:26.612940 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.612955 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.613177 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.613214 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.613232 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.613253 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.613277 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.613304 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 14:03:26 crc kubenswrapper[4739]: E0218 14:03:26.613583 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.613607 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.738678 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.739244 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 
14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.739278 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.739328 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.739362 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.739553 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.739659 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.739793 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841044 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841146 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841211 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841245 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841331 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841361 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841471 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841494 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841505 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841410 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841587 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841700 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 
14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841621 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841639 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841607 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.841761 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.883085 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.884813 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.885849 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db" exitCode=0 Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.885880 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59" exitCode=0 Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.885890 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990" exitCode=0 Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.885900 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e" exitCode=2 Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.885986 4739 scope.go:117] "RemoveContainer" containerID="8cfec73408b7a7dab92e617e380e04f1037e4acd0a891a18e9e96bc8bd5387d8" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.889293 4739 generic.go:334] "Generic (PLEG): container finished" podID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" containerID="37732bee3d0ca90d1c6df703d80575c9d4075b9f00e0d96971f76ccebc6611c8" exitCode=0 Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.889328 4739 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e440b2ba-20b4-4568-99bc-ffad1f19eafb","Type":"ContainerDied","Data":"37732bee3d0ca90d1c6df703d80575c9d4075b9f00e0d96971f76ccebc6611c8"} Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.890105 4739 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 14:03:26 crc kubenswrapper[4739]: I0218 14:03:26.890347 4739 status_manager.go:851] "Failed to get status for pod" podUID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 14:03:27 crc kubenswrapper[4739]: I0218 14:03:27.897791 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 14:03:27 crc kubenswrapper[4739]: E0218 14:03:27.905986 4739 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 14:03:27 crc kubenswrapper[4739]: E0218 14:03:27.906533 4739 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 14:03:27 crc kubenswrapper[4739]: E0218 14:03:27.906895 4739 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 14:03:27 crc kubenswrapper[4739]: E0218 14:03:27.907269 4739 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 14:03:27 crc kubenswrapper[4739]: E0218 14:03:27.907622 4739 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 14:03:27 crc kubenswrapper[4739]: I0218 14:03:27.907657 4739 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 18 14:03:27 crc kubenswrapper[4739]: E0218 14:03:27.907950 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="200ms" Feb 18 14:03:28 crc kubenswrapper[4739]: E0218 14:03:28.108694 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="400ms" 
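The "Failed to ensure lease exists, will retry" entries above and below illustrate the kubelet's node-lease retry behavior while api-int.crc.testing:6443 refuses connections: the retry interval doubles from 200ms to 400ms here, then continues through 800ms, 1.6s, 3.2s, and 6.4s further down in this log. What follows is a minimal, illustrative Go sketch of that doubling backoff; retryWithDoublingBackoff and its ensure parameter are hypothetical names for illustration, not the kubelet's actual nodelease implementation:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryWithDoublingBackoff calls ensure() until it succeeds, sleeping
    // between attempts and doubling the interval up to max. With base=200ms
    // this reproduces the 200ms -> 400ms -> 800ms -> 1.6s -> 3.2s -> 6.4s
    // progression visible in the surrounding log entries.
    func retryWithDoublingBackoff(ensure func() error, base, max time.Duration) {
        interval := base
        for {
            if err := ensure(); err == nil {
                return
            }
            fmt.Printf("Failed to ensure lease exists, will retry interval=%q\n", interval.String())
            time.Sleep(interval)
            if interval*2 <= max {
                interval *= 2
            }
        }
    }

    func main() {
        attempts := 0
        retryWithDoublingBackoff(func() error {
            attempts++
            if attempts < 6 {
                // Mirrors the error string logged while the API server is down.
                return errors.New("dial tcp 38.102.83.80:6443: connect: connection refused")
            }
            return nil // API server reachable again
        }, 200*time.Millisecond, 6400*time.Millisecond)
    }

The cap of 6.4s matches the largest interval observed in this log; bounding the interval this way keeps the kubelet probing often enough to re-establish its lease quickly once the API server's listener returns, which is what happens below once kube-apiserver-crc restarts.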
Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.127931 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.128537 4739 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.129009 4739 status_manager.go:851] "Failed to get status for pod" podUID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.261967 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e440b2ba-20b4-4568-99bc-ffad1f19eafb-kube-api-access\") pod \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\" (UID: \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\") " Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.262140 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e440b2ba-20b4-4568-99bc-ffad1f19eafb-kubelet-dir\") pod \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\" (UID: \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\") " Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.262211 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e440b2ba-20b4-4568-99bc-ffad1f19eafb-var-lock\") pod \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\" (UID: \"e440b2ba-20b4-4568-99bc-ffad1f19eafb\") " Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.262705 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e440b2ba-20b4-4568-99bc-ffad1f19eafb-var-lock" (OuterVolumeSpecName: "var-lock") pod "e440b2ba-20b4-4568-99bc-ffad1f19eafb" (UID: "e440b2ba-20b4-4568-99bc-ffad1f19eafb"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.262783 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e440b2ba-20b4-4568-99bc-ffad1f19eafb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e440b2ba-20b4-4568-99bc-ffad1f19eafb" (UID: "e440b2ba-20b4-4568-99bc-ffad1f19eafb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.267603 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e440b2ba-20b4-4568-99bc-ffad1f19eafb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e440b2ba-20b4-4568-99bc-ffad1f19eafb" (UID: "e440b2ba-20b4-4568-99bc-ffad1f19eafb"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.363834 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e440b2ba-20b4-4568-99bc-ffad1f19eafb-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.363877 4739 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e440b2ba-20b4-4568-99bc-ffad1f19eafb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.363895 4739 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e440b2ba-20b4-4568-99bc-ffad1f19eafb-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.415557 4739 status_manager.go:851] "Failed to get status for pod" podUID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.416071 4739 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 14:03:28 crc kubenswrapper[4739]: E0218 14:03:28.510271 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="800ms" Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.904583 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e440b2ba-20b4-4568-99bc-ffad1f19eafb","Type":"ContainerDied","Data":"a14cae65a1f3403447f1c63df6c06c98c502f096844a01ad5304537c30625604"} Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.904896 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a14cae65a1f3403447f1c63df6c06c98c502f096844a01ad5304537c30625604" Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.904692 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 14:03:28 crc kubenswrapper[4739]: I0218 14:03:28.909326 4739 status_manager.go:851] "Failed to get status for pod" podUID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.007737 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.008849 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.009398 4739 status_manager.go:851] "Failed to get status for pod" podUID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.009767 4739 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.172653 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.172750 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.172813 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.173094 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.173130 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.173150 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.274115 4739 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.274162 4739 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.274174 4739 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 14:03:29 crc kubenswrapper[4739]: E0218 14:03:29.312028 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="1.6s" Feb 18 14:03:29 crc kubenswrapper[4739]: E0218 14:03:29.754333 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:03:29Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:03:29Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:03:29Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T14:03:29Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 14:03:29 crc kubenswrapper[4739]: E0218 14:03:29.754853 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 14:03:29 crc kubenswrapper[4739]: E0218 14:03:29.755637 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 14:03:29 crc kubenswrapper[4739]: E0218 14:03:29.756429 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 14:03:29 crc kubenswrapper[4739]: E0218 14:03:29.756871 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 
38.102.83.80:6443: connect: connection refused" Feb 18 14:03:29 crc kubenswrapper[4739]: E0218 14:03:29.756904 4739 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.914679 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.915437 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc" exitCode=0 Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.915511 4739 scope.go:117] "RemoveContainer" containerID="4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db" Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.915582 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.932606 4739 scope.go:117] "RemoveContainer" containerID="897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59" Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.933348 4739 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.933847 4739 status_manager.go:851] "Failed to get status for pod" podUID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.949456 4739 scope.go:117] "RemoveContainer" containerID="132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990" Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.965920 4739 scope.go:117] "RemoveContainer" containerID="c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e" Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.980575 4739 scope.go:117] "RemoveContainer" containerID="6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc" Feb 18 14:03:29 crc kubenswrapper[4739]: I0218 14:03:29.999811 4739 scope.go:117] "RemoveContainer" containerID="6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a" Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.028146 4739 scope.go:117] "RemoveContainer" containerID="4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db" Feb 18 14:03:30 crc kubenswrapper[4739]: E0218 14:03:30.029284 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\": container with ID starting with 4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db not found: ID does not exist" containerID="4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db" Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.029336 4739 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db"} err="failed to get container status \"4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\": rpc error: code = NotFound desc = could not find container \"4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db\": container with ID starting with 4e69e9e434ed53bd4f5d7f7730a902271a70d82ef9d0f3d08df86b398c60f0db not found: ID does not exist" Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.029369 4739 scope.go:117] "RemoveContainer" containerID="897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59" Feb 18 14:03:30 crc kubenswrapper[4739]: E0218 14:03:30.029913 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\": container with ID starting with 897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59 not found: ID does not exist" containerID="897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59" Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.030018 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59"} err="failed to get container status \"897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\": rpc error: code = NotFound desc = could not find container \"897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59\": container with ID starting with 897735b7e41d8eebfed3a9d316ddb2bb2fdde15999f0fde9778b9e6c64bf7a59 not found: ID does not exist" Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.030100 4739 scope.go:117] "RemoveContainer" containerID="132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990" Feb 18 14:03:30 crc kubenswrapper[4739]: E0218 14:03:30.030677 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\": container with ID starting with 132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990 not found: ID does not exist" containerID="132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990" Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.030771 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990"} err="failed to get container status \"132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\": rpc error: code = NotFound desc = could not find container \"132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990\": container with ID starting with 132838d09651225b3a93282e2d983d8f3db9cacfa2c02e2d7ddfd06d98e98990 not found: ID does not exist" Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.030849 4739 scope.go:117] "RemoveContainer" containerID="c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e" Feb 18 14:03:30 crc kubenswrapper[4739]: E0218 14:03:30.031208 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\": container with ID starting with c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e not found: ID does not exist" 
containerID="c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e" Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.031245 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e"} err="failed to get container status \"c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\": rpc error: code = NotFound desc = could not find container \"c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e\": container with ID starting with c91f331d23829a63a1e7bd127f5d4b4a72a0437b31819fbe92ebef802de59c8e not found: ID does not exist" Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.031268 4739 scope.go:117] "RemoveContainer" containerID="6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc" Feb 18 14:03:30 crc kubenswrapper[4739]: E0218 14:03:30.032202 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\": container with ID starting with 6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc not found: ID does not exist" containerID="6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc" Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.032312 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc"} err="failed to get container status \"6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\": rpc error: code = NotFound desc = could not find container \"6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc\": container with ID starting with 6b6831e4433111c6e6d46f92844fffb858cfdcdaa17124b526c8682c736aa8bc not found: ID does not exist" Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.032382 4739 scope.go:117] "RemoveContainer" containerID="6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a" Feb 18 14:03:30 crc kubenswrapper[4739]: E0218 14:03:30.034022 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\": container with ID starting with 6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a not found: ID does not exist" containerID="6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a" Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.034092 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a"} err="failed to get container status \"6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\": rpc error: code = NotFound desc = could not find container \"6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a\": container with ID starting with 6644e64727b0a475c6f78384cbb35066092be8e7092d620da1f7d884ab0f565a not found: ID does not exist" Feb 18 14:03:30 crc kubenswrapper[4739]: I0218 14:03:30.416923 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 18 14:03:30 crc kubenswrapper[4739]: E0218 14:03:30.913111 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="3.2s" Feb 18 14:03:31 crc kubenswrapper[4739]: E0218 14:03:31.651204 4739 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:31 crc kubenswrapper[4739]: I0218 14:03:31.652071 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:31 crc kubenswrapper[4739]: W0218 14:03:31.682636 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-ae747044135e538e48edd8de70571ba679e3a31114f59c0d7ac55f71bf462bed WatchSource:0}: Error finding container ae747044135e538e48edd8de70571ba679e3a31114f59c0d7ac55f71bf462bed: Status 404 returned error can't find the container with id ae747044135e538e48edd8de70571ba679e3a31114f59c0d7ac55f71bf462bed Feb 18 14:03:31 crc kubenswrapper[4739]: E0218 14:03:31.686231 4739 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18955c352057568e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 14:03:31.68577499 +0000 UTC m=+244.181495912,LastTimestamp:2026-02-18 14:03:31.68577499 +0000 UTC m=+244.181495912,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 14:03:31 crc kubenswrapper[4739]: I0218 14:03:31.938757 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ae747044135e538e48edd8de70571ba679e3a31114f59c0d7ac55f71bf462bed"} Feb 18 14:03:32 crc kubenswrapper[4739]: I0218 14:03:32.944752 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"0cff30125f34e3e18644697dd954357ebea67aec26861a311fd8fb4e9f1d2bdf"} Feb 18 14:03:32 crc kubenswrapper[4739]: E0218 14:03:32.945323 4739 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:32 crc kubenswrapper[4739]: I0218 14:03:32.945536 4739 
status_manager.go:851] "Failed to get status for pod" podUID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 14:03:33 crc kubenswrapper[4739]: E0218 14:03:33.951150 4739 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:03:34 crc kubenswrapper[4739]: E0218 14:03:34.113647 4739 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="6.4s" Feb 18 14:03:37 crc kubenswrapper[4739]: I0218 14:03:37.409421 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:37 crc kubenswrapper[4739]: I0218 14:03:37.410473 4739 status_manager.go:851] "Failed to get status for pod" podUID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 14:03:37 crc kubenswrapper[4739]: I0218 14:03:37.434556 4739 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0" Feb 18 14:03:37 crc kubenswrapper[4739]: I0218 14:03:37.434872 4739 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0" Feb 18 14:03:37 crc kubenswrapper[4739]: E0218 14:03:37.435471 4739 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:37 crc kubenswrapper[4739]: I0218 14:03:37.436095 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:37 crc kubenswrapper[4739]: E0218 14:03:37.484200 4739 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" volumeName="registry-storage" Feb 18 14:03:37 crc kubenswrapper[4739]: I0218 14:03:37.972796 4739 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="1395b979aac8e3d361decbe0ed7edf0aa760b49b8dee8acf52ecff93a1f3beb3" exitCode=0 Feb 18 14:03:37 crc kubenswrapper[4739]: I0218 14:03:37.972867 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"1395b979aac8e3d361decbe0ed7edf0aa760b49b8dee8acf52ecff93a1f3beb3"} Feb 18 14:03:37 crc kubenswrapper[4739]: I0218 14:03:37.972920 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"dcb4253475aa835dd8e8c53b9cb2c47800a6a7067c1d42e34f079c6cff7a10e2"} Feb 18 14:03:37 crc kubenswrapper[4739]: I0218 14:03:37.973206 4739 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0" Feb 18 14:03:37 crc kubenswrapper[4739]: I0218 14:03:37.973221 4739 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0" Feb 18 14:03:37 crc kubenswrapper[4739]: E0218 14:03:37.973612 4739 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:37 crc kubenswrapper[4739]: I0218 14:03:37.973741 4739 status_manager.go:851] "Failed to get status for pod" podUID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 14:03:38 crc kubenswrapper[4739]: I0218 14:03:38.985467 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"62a20bd569adc675b5afe07886789f29380b0a42724bcb48190d65aec0c20952"} Feb 18 14:03:38 crc kubenswrapper[4739]: I0218 14:03:38.986048 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6027817e8a32cf8e088506b9b11537d1b4801406d7ab6b01d19a01b37a69bac6"} Feb 18 14:03:38 crc kubenswrapper[4739]: I0218 14:03:38.986061 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9d4aa79db9145f305e0bc340074d31c99dbd9f3e5d3aad01ea2a4455bd4cd201"} Feb 18 14:03:38 crc kubenswrapper[4739]: I0218 14:03:38.986073 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5e181c6dded8617100a39708c785c0996b5ab79691882170d631958b2cca9c9e"} Feb 18 14:03:39 crc kubenswrapper[4739]: I0218 14:03:39.992901 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 14:03:39 crc kubenswrapper[4739]: I0218 14:03:39.992949 4739 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366" exitCode=1 Feb 18 14:03:39 crc kubenswrapper[4739]: I0218 14:03:39.993002 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366"} Feb 18 14:03:39 crc kubenswrapper[4739]: I0218 14:03:39.993427 4739 scope.go:117] "RemoveContainer" containerID="158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366" Feb 18 14:03:39 crc kubenswrapper[4739]: I0218 14:03:39.998728 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d6095c05b40e14712a4f26388402ba5fb295b0972a796470a65ef0491aa781a7"} Feb 18 14:03:39 crc kubenswrapper[4739]: I0218 14:03:39.998992 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:39 crc kubenswrapper[4739]: I0218 14:03:39.999092 4739 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0" Feb 18 14:03:39 crc kubenswrapper[4739]: I0218 14:03:39.999127 4739 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0" Feb 18 14:03:41 crc kubenswrapper[4739]: I0218 14:03:41.006960 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 14:03:41 crc kubenswrapper[4739]: I0218 14:03:41.007348 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"610a047b229be1341e5743f79181f9b3692358957501791b9cc4b591a8f75fdd"} Feb 18 14:03:41 crc kubenswrapper[4739]: I0218 14:03:41.648664 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 14:03:41 crc kubenswrapper[4739]: I0218 14:03:41.652584 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 14:03:42 crc kubenswrapper[4739]: I0218 14:03:42.012733 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 14:03:42 crc kubenswrapper[4739]: I0218 14:03:42.436682 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:42 crc kubenswrapper[4739]: I0218 14:03:42.436723 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:42 crc kubenswrapper[4739]: I0218 14:03:42.444709 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:45 crc kubenswrapper[4739]: I0218 14:03:45.005635 4739 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:45 crc kubenswrapper[4739]: I0218 14:03:45.027695 4739 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0" Feb 18 14:03:45 crc kubenswrapper[4739]: I0218 14:03:45.027722 4739 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0" Feb 18 14:03:45 crc kubenswrapper[4739]: I0218 14:03:45.032563 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:45 crc kubenswrapper[4739]: I0218 14:03:45.035950 4739 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0baef8ef-0291-449b-b6a9-b7e8c8eae0ae" Feb 18 14:03:46 crc kubenswrapper[4739]: I0218 14:03:46.032705 4739 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0" Feb 18 14:03:46 crc kubenswrapper[4739]: I0218 14:03:46.032733 4739 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="7b8aa7a5-f2f3-4dfb-bb7f-4db0b63e1bb0" Feb 18 14:03:48 crc kubenswrapper[4739]: I0218 14:03:48.432794 4739 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0baef8ef-0291-449b-b6a9-b7e8c8eae0ae" Feb 18 14:03:51 crc kubenswrapper[4739]: I0218 14:03:51.343871 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 18 14:03:51 crc kubenswrapper[4739]: I0218 14:03:51.505317 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 18 14:03:51 crc kubenswrapper[4739]: I0218 14:03:51.746642 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 18 14:03:52 crc kubenswrapper[4739]: I0218 14:03:52.084979 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 14:03:52 crc kubenswrapper[4739]: I0218 14:03:52.267271 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 18 14:03:52 crc kubenswrapper[4739]: I0218 14:03:52.320112 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 
18 14:03:52 crc kubenswrapper[4739]: I0218 14:03:52.355966 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 14:03:52 crc kubenswrapper[4739]: I0218 14:03:52.478109 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 14:03:52 crc kubenswrapper[4739]: I0218 14:03:52.777185 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 18 14:03:52 crc kubenswrapper[4739]: I0218 14:03:52.838385 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 18 14:03:52 crc kubenswrapper[4739]: I0218 14:03:52.968281 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 14:03:53 crc kubenswrapper[4739]: I0218 14:03:53.175465 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 18 14:03:53 crc kubenswrapper[4739]: I0218 14:03:53.298387 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 14:03:53 crc kubenswrapper[4739]: I0218 14:03:53.344179 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 18 14:03:53 crc kubenswrapper[4739]: I0218 14:03:53.570729 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 18 14:03:53 crc kubenswrapper[4739]: I0218 14:03:53.736229 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 18 14:03:53 crc kubenswrapper[4739]: I0218 14:03:53.779931 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 14:03:53 crc kubenswrapper[4739]: I0218 14:03:53.802288 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 18 14:03:53 crc kubenswrapper[4739]: I0218 14:03:53.842555 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 14:03:53 crc kubenswrapper[4739]: I0218 14:03:53.968926 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 14:03:54 crc kubenswrapper[4739]: I0218 14:03:54.136553 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 18 14:03:54 crc kubenswrapper[4739]: I0218 14:03:54.253216 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 14:03:54 crc kubenswrapper[4739]: I0218 14:03:54.349552 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 18 14:03:54 crc kubenswrapper[4739]: I0218 14:03:54.401161 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 18 14:03:54 crc kubenswrapper[4739]: I0218 14:03:54.603975 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 18 14:03:54 crc kubenswrapper[4739]: I0218 14:03:54.813546 4739 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 18 14:03:54 crc kubenswrapper[4739]: I0218 14:03:54.926299 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 18 14:03:55 crc kubenswrapper[4739]: I0218 14:03:55.267279 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 18 14:03:55 crc kubenswrapper[4739]: I0218 14:03:55.551310 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 14:03:55 crc kubenswrapper[4739]: I0218 14:03:55.944689 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 18 14:03:55 crc kubenswrapper[4739]: I0218 14:03:55.979356 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 18 14:03:56 crc kubenswrapper[4739]: I0218 14:03:56.663180 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 18 14:03:56 crc kubenswrapper[4739]: I0218 14:03:56.707974 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 18 14:03:56 crc kubenswrapper[4739]: I0218 14:03:56.770989 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 18 14:03:56 crc kubenswrapper[4739]: I0218 14:03:56.897177 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 14:03:56 crc kubenswrapper[4739]: I0218 14:03:56.943318 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 18 14:03:57 crc kubenswrapper[4739]: I0218 14:03:57.006106 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 18 14:03:57 crc kubenswrapper[4739]: I0218 14:03:57.020179 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 18 14:03:57 crc kubenswrapper[4739]: I0218 14:03:57.193611 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 18 14:03:57 crc kubenswrapper[4739]: I0218 14:03:57.288701 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 18 14:03:57 crc kubenswrapper[4739]: I0218 14:03:57.728736 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 14:03:57 crc kubenswrapper[4739]: I0218 14:03:57.912569 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 18 14:03:58 crc kubenswrapper[4739]: I0218 14:03:58.401610 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 18 14:03:58 crc kubenswrapper[4739]: I0218 14:03:58.542168 4739 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 
18 14:03:58 crc kubenswrapper[4739]: I0218 14:03:58.848185 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 18 14:03:58 crc kubenswrapper[4739]: I0218 14:03:58.913190 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 18 14:03:58 crc kubenswrapper[4739]: I0218 14:03:58.956860 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.000801 4739 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.008571 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.008647 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.015599 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.041482 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.041460691 podStartE2EDuration="14.041460691s" podCreationTimestamp="2026-02-18 14:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:03:59.037754647 +0000 UTC m=+271.533475639" watchObservedRunningTime="2026-02-18 14:03:59.041460691 +0000 UTC m=+271.537181623" Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.083360 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.166078 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.429027 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.622584 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.676186 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.805120 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.871026 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.871884 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 14:03:59 crc kubenswrapper[4739]: I0218 14:03:59.936374 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 14:03:59 crc kubenswrapper[4739]: 
I0218 14:03:59.980638 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 14:04:00 crc kubenswrapper[4739]: I0218 14:04:00.328399 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 18 14:04:00 crc kubenswrapper[4739]: I0218 14:04:00.439367 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 18 14:04:00 crc kubenswrapper[4739]: I0218 14:04:00.557651 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 14:04:00 crc kubenswrapper[4739]: I0218 14:04:00.726511 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 14:04:00 crc kubenswrapper[4739]: I0218 14:04:00.843116 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 18 14:04:00 crc kubenswrapper[4739]: I0218 14:04:00.993751 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.064878 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.097322 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.219369 4739 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.231972 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.303701 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.327198 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.367664 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.376220 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.617774 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.641032 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.685139 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.738910 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.753081 4739 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.755952 4739 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.778214 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.813282 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.831314 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.889956 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.897787 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.904799 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.926543 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.950589 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.969916 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 18 14:04:01 crc kubenswrapper[4739]: I0218 14:04:01.977494 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.099985 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.515370 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.524860 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.593433 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.661855 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.662617 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.682875 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 
14:04:02.719810 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.722623 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.741985 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.790547 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.807334 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.887496 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.905062 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 18 14:04:02 crc kubenswrapper[4739]: I0218 14:04:02.945128 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.074715 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.104288 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.142394 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.201223 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.220842 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.235815 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.253411 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.341425 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.376179 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.675784 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.730421 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.772616 4739 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.794034 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.823083 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.847533 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.860024 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 18 14:04:03 crc kubenswrapper[4739]: I0218 14:04:03.882621 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.004568 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.046437 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.050890 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.111204 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.136852 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.176545 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.448918 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.516973 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.703565 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.705191 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.705846 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.853385 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.877225 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.901917 4739 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.922955 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 18 14:04:04 crc kubenswrapper[4739]: I0218 14:04:04.975138 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.118669 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.181992 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.271622 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.378714 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.425668 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.483005 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.506733 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.517419 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.531225 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.549132 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.573123 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.690132 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.737562 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.743785 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.864495 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 18 14:04:05 crc kubenswrapper[4739]: I0218 14:04:05.871429 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 18 14:04:06 crc kubenswrapper[4739]: 
I0218 14:04:06.072227 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 14:04:06 crc kubenswrapper[4739]: I0218 14:04:06.174643 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 18 14:04:06 crc kubenswrapper[4739]: I0218 14:04:06.211432 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 18 14:04:06 crc kubenswrapper[4739]: I0218 14:04:06.220537 4739 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 18 14:04:06 crc kubenswrapper[4739]: I0218 14:04:06.223809 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 14:04:06 crc kubenswrapper[4739]: I0218 14:04:06.323630 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 14:04:06 crc kubenswrapper[4739]: I0218 14:04:06.583493 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 18 14:04:06 crc kubenswrapper[4739]: I0218 14:04:06.615704 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 18 14:04:06 crc kubenswrapper[4739]: I0218 14:04:06.717962 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 18 14:04:06 crc kubenswrapper[4739]: I0218 14:04:06.773241 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 18 14:04:06 crc kubenswrapper[4739]: I0218 14:04:06.826669 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 18 14:04:06 crc kubenswrapper[4739]: I0218 14:04:06.862631 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.017536 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.185592 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.189680 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.242551 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.248310 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.314576 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.325253 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.376438 4739 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.541992 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.570690 4739 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.570897 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://0cff30125f34e3e18644697dd954357ebea67aec26861a311fd8fb4e9f1d2bdf" gracePeriod=5 Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.583679 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.641607 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.663789 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.726244 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.745075 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.750347 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.870196 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.873979 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.892033 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.930915 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.978558 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.987594 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 14:04:07 crc kubenswrapper[4739]: I0218 14:04:07.987847 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.148686 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 14:04:08 crc 
kubenswrapper[4739]: I0218 14:04:08.152664 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.198161 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.225871 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.303156 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.331339 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.404883 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.434369 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.462460 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.484761 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.516730 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.584892 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.649762 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.705090 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.814549 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.827064 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.892549 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.894376 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.906923 4739 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 18 14:04:08 crc kubenswrapper[4739]: I0218 14:04:08.972196 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 
14:04:09.238496 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.278125 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.286702 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.287153 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.357059 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.527112 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.701559 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.760076 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.767967 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.772087 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.783288 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.935812 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.938367 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 18 14:04:09 crc kubenswrapper[4739]: I0218 14:04:09.961647 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 18 14:04:10 crc kubenswrapper[4739]: I0218 14:04:10.060039 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 18 14:04:10 crc kubenswrapper[4739]: I0218 14:04:10.389329 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 18 14:04:10 crc kubenswrapper[4739]: I0218 14:04:10.411796 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 18 14:04:10 crc kubenswrapper[4739]: I0218 14:04:10.549428 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 18 14:04:10 crc kubenswrapper[4739]: I0218 14:04:10.570851 4739 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 18 14:04:10 crc kubenswrapper[4739]: I0218 14:04:10.742158 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 18 14:04:10 crc kubenswrapper[4739]: I0218 14:04:10.765731 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 18 14:04:10 crc kubenswrapper[4739]: I0218 14:04:10.875843 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 14:04:10 crc kubenswrapper[4739]: I0218 14:04:10.914754 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 14:04:11 crc kubenswrapper[4739]: I0218 14:04:11.055547 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 14:04:11 crc kubenswrapper[4739]: I0218 14:04:11.145223 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 18 14:04:11 crc kubenswrapper[4739]: I0218 14:04:11.212613 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 18 14:04:11 crc kubenswrapper[4739]: I0218 14:04:11.235795 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 18 14:04:11 crc kubenswrapper[4739]: I0218 14:04:11.314923 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 14:04:11 crc kubenswrapper[4739]: I0218 14:04:11.519683 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 18 14:04:11 crc kubenswrapper[4739]: I0218 14:04:11.750840 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 18 14:04:11 crc kubenswrapper[4739]: I0218 14:04:11.754520 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 18 14:04:11 crc kubenswrapper[4739]: I0218 14:04:11.825482 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 18 14:04:11 crc kubenswrapper[4739]: I0218 14:04:11.975403 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 18 14:04:12 crc kubenswrapper[4739]: I0218 14:04:12.032006 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 18 14:04:12 crc kubenswrapper[4739]: I0218 14:04:12.044493 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 18 14:04:12 crc kubenswrapper[4739]: I0218 14:04:12.097613 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 18 14:04:12 crc kubenswrapper[4739]: I0218 14:04:12.144107 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 14:04:12 crc kubenswrapper[4739]: I0218 14:04:12.336399 4739 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 18 14:04:12 crc kubenswrapper[4739]: I0218 14:04:12.699649 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 18 14:04:12 crc kubenswrapper[4739]: I0218 14:04:12.919256 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 18 14:04:12 crc kubenswrapper[4739]: I0218 14:04:12.951379 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.068357 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.148469 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.148549 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.153776 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.212832 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.212895 4739 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="0cff30125f34e3e18644697dd954357ebea67aec26861a311fd8fb4e9f1d2bdf" exitCode=137 Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.212944 4739 scope.go:117] "RemoveContainer" containerID="0cff30125f34e3e18644697dd954357ebea67aec26861a311fd8fb4e9f1d2bdf" Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.212988 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.235900 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.244160 4739 scope.go:117] "RemoveContainer" containerID="0cff30125f34e3e18644697dd954357ebea67aec26861a311fd8fb4e9f1d2bdf" Feb 18 14:04:13 crc kubenswrapper[4739]: E0218 14:04:13.244948 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cff30125f34e3e18644697dd954357ebea67aec26861a311fd8fb4e9f1d2bdf\": container with ID starting with 0cff30125f34e3e18644697dd954357ebea67aec26861a311fd8fb4e9f1d2bdf not found: ID does not exist" containerID="0cff30125f34e3e18644697dd954357ebea67aec26861a311fd8fb4e9f1d2bdf" Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.244996 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cff30125f34e3e18644697dd954357ebea67aec26861a311fd8fb4e9f1d2bdf"} err="failed to get container status \"0cff30125f34e3e18644697dd954357ebea67aec26861a311fd8fb4e9f1d2bdf\": rpc error: code = NotFound desc = could not find container \"0cff30125f34e3e18644697dd954357ebea67aec26861a311fd8fb4e9f1d2bdf\": container with ID starting with 0cff30125f34e3e18644697dd954357ebea67aec26861a311fd8fb4e9f1d2bdf not found: ID does not exist" Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.257253 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.257428 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.257549 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.257553 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.257612 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.257631 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.257655 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.257679 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.257787 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.258083 4739 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.258114 4739 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.258139 4739 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.258165 4739 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.267678 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.268172 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). 
InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.359753 4739 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:13 crc kubenswrapper[4739]: I0218 14:04:13.644991 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 18 14:04:14 crc kubenswrapper[4739]: I0218 14:04:14.417437 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 18 14:04:15 crc kubenswrapper[4739]: I0218 14:04:15.325648 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.163548 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2ch5b"] Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.164526 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2ch5b" podUID="692fafe2-8be1-4359-8a74-f8916c8f6d55" containerName="registry-server" containerID="cri-o://44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5" gracePeriod=30 Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.170589 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-47vjm"] Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.170953 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-47vjm" podUID="a44b0172-9ef1-4181-8380-bfe703bdc50d" containerName="registry-server" containerID="cri-o://2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa" gracePeriod=30 Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.192223 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c4w7p"] Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.192592 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" podUID="c43a59b1-306c-4a0e-9f9f-fad2e9082d55" containerName="marketplace-operator" containerID="cri-o://de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1" gracePeriod=30 Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.201977 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wznkg"] Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.202272 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wznkg" podUID="6955631f-9981-47a5-8ecb-8756df4e0256" containerName="registry-server" containerID="cri-o://1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8" gracePeriod=30 Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.208677 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fm56z"] Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.209099 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fm56z" 
podUID="a7549289-fee3-4211-b340-731ff70593d1" containerName="registry-server" containerID="cri-o://91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8" gracePeriod=30 Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.230191 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-28vcn"] Feb 18 14:04:17 crc kubenswrapper[4739]: E0218 14:04:17.230681 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" containerName="installer" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.230778 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" containerName="installer" Feb 18 14:04:17 crc kubenswrapper[4739]: E0218 14:04:17.230896 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.230974 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.231174 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e440b2ba-20b4-4568-99bc-ffad1f19eafb" containerName="installer" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.231277 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.231816 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.243219 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-28vcn"] Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.313531 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0dc6acff-649a-4e95-ba42-ad79dae4a787-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-28vcn\" (UID: \"0dc6acff-649a-4e95-ba42-ad79dae4a787\") " pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.313630 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pjnf\" (UniqueName: \"kubernetes.io/projected/0dc6acff-649a-4e95-ba42-ad79dae4a787-kube-api-access-8pjnf\") pod \"marketplace-operator-79b997595-28vcn\" (UID: \"0dc6acff-649a-4e95-ba42-ad79dae4a787\") " pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.313679 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0dc6acff-649a-4e95-ba42-ad79dae4a787-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-28vcn\" (UID: \"0dc6acff-649a-4e95-ba42-ad79dae4a787\") " pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.415671 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/0dc6acff-649a-4e95-ba42-ad79dae4a787-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-28vcn\" (UID: \"0dc6acff-649a-4e95-ba42-ad79dae4a787\") " pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.415739 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0dc6acff-649a-4e95-ba42-ad79dae4a787-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-28vcn\" (UID: \"0dc6acff-649a-4e95-ba42-ad79dae4a787\") " pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.415787 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pjnf\" (UniqueName: \"kubernetes.io/projected/0dc6acff-649a-4e95-ba42-ad79dae4a787-kube-api-access-8pjnf\") pod \"marketplace-operator-79b997595-28vcn\" (UID: \"0dc6acff-649a-4e95-ba42-ad79dae4a787\") " pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.416946 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0dc6acff-649a-4e95-ba42-ad79dae4a787-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-28vcn\" (UID: \"0dc6acff-649a-4e95-ba42-ad79dae4a787\") " pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.420761 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0dc6acff-649a-4e95-ba42-ad79dae4a787-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-28vcn\" (UID: \"0dc6acff-649a-4e95-ba42-ad79dae4a787\") " pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.431254 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pjnf\" (UniqueName: \"kubernetes.io/projected/0dc6acff-649a-4e95-ba42-ad79dae4a787-kube-api-access-8pjnf\") pod \"marketplace-operator-79b997595-28vcn\" (UID: \"0dc6acff-649a-4e95-ba42-ad79dae4a787\") " pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.606136 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.610379 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.614297 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.618563 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.623295 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.627664 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.723028 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-marketplace-operator-metrics\") pod \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\" (UID: \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.723109 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwf78\" (UniqueName: \"kubernetes.io/projected/a7549289-fee3-4211-b340-731ff70593d1-kube-api-access-hwf78\") pod \"a7549289-fee3-4211-b340-731ff70593d1\" (UID: \"a7549289-fee3-4211-b340-731ff70593d1\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.723135 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7549289-fee3-4211-b340-731ff70593d1-catalog-content\") pod \"a7549289-fee3-4211-b340-731ff70593d1\" (UID: \"a7549289-fee3-4211-b340-731ff70593d1\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.723171 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78txq\" (UniqueName: \"kubernetes.io/projected/692fafe2-8be1-4359-8a74-f8916c8f6d55-kube-api-access-78txq\") pod \"692fafe2-8be1-4359-8a74-f8916c8f6d55\" (UID: \"692fafe2-8be1-4359-8a74-f8916c8f6d55\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.723203 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/692fafe2-8be1-4359-8a74-f8916c8f6d55-catalog-content\") pod \"692fafe2-8be1-4359-8a74-f8916c8f6d55\" (UID: \"692fafe2-8be1-4359-8a74-f8916c8f6d55\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.723225 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2c4j\" (UniqueName: \"kubernetes.io/projected/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-kube-api-access-w2c4j\") pod \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\" (UID: \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.723250 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg4gr\" (UniqueName: \"kubernetes.io/projected/a44b0172-9ef1-4181-8380-bfe703bdc50d-kube-api-access-gg4gr\") pod \"a44b0172-9ef1-4181-8380-bfe703bdc50d\" (UID: \"a44b0172-9ef1-4181-8380-bfe703bdc50d\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.723320 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6955631f-9981-47a5-8ecb-8756df4e0256-catalog-content\") pod \"6955631f-9981-47a5-8ecb-8756df4e0256\" (UID: \"6955631f-9981-47a5-8ecb-8756df4e0256\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.726152 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6955631f-9981-47a5-8ecb-8756df4e0256-utilities\") pod \"6955631f-9981-47a5-8ecb-8756df4e0256\" (UID: \"6955631f-9981-47a5-8ecb-8756df4e0256\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.726197 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbcms\" (UniqueName: 
\"kubernetes.io/projected/6955631f-9981-47a5-8ecb-8756df4e0256-kube-api-access-nbcms\") pod \"6955631f-9981-47a5-8ecb-8756df4e0256\" (UID: \"6955631f-9981-47a5-8ecb-8756df4e0256\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.726288 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a44b0172-9ef1-4181-8380-bfe703bdc50d-utilities\") pod \"a44b0172-9ef1-4181-8380-bfe703bdc50d\" (UID: \"a44b0172-9ef1-4181-8380-bfe703bdc50d\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.726572 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-marketplace-trusted-ca\") pod \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\" (UID: \"c43a59b1-306c-4a0e-9f9f-fad2e9082d55\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.726637 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7549289-fee3-4211-b340-731ff70593d1-utilities\") pod \"a7549289-fee3-4211-b340-731ff70593d1\" (UID: \"a7549289-fee3-4211-b340-731ff70593d1\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.726678 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/692fafe2-8be1-4359-8a74-f8916c8f6d55-utilities\") pod \"692fafe2-8be1-4359-8a74-f8916c8f6d55\" (UID: \"692fafe2-8be1-4359-8a74-f8916c8f6d55\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.726695 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a44b0172-9ef1-4181-8380-bfe703bdc50d-catalog-content\") pod \"a44b0172-9ef1-4181-8380-bfe703bdc50d\" (UID: \"a44b0172-9ef1-4181-8380-bfe703bdc50d\") " Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.727060 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6955631f-9981-47a5-8ecb-8756df4e0256-utilities" (OuterVolumeSpecName: "utilities") pod "6955631f-9981-47a5-8ecb-8756df4e0256" (UID: "6955631f-9981-47a5-8ecb-8756df4e0256"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.727148 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6955631f-9981-47a5-8ecb-8756df4e0256-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.727329 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a44b0172-9ef1-4181-8380-bfe703bdc50d-utilities" (OuterVolumeSpecName: "utilities") pod "a44b0172-9ef1-4181-8380-bfe703bdc50d" (UID: "a44b0172-9ef1-4181-8380-bfe703bdc50d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.727397 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7549289-fee3-4211-b340-731ff70593d1-utilities" (OuterVolumeSpecName: "utilities") pod "a7549289-fee3-4211-b340-731ff70593d1" (UID: "a7549289-fee3-4211-b340-731ff70593d1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.727547 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "c43a59b1-306c-4a0e-9f9f-fad2e9082d55" (UID: "c43a59b1-306c-4a0e-9f9f-fad2e9082d55"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.727569 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "c43a59b1-306c-4a0e-9f9f-fad2e9082d55" (UID: "c43a59b1-306c-4a0e-9f9f-fad2e9082d55"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.727583 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/692fafe2-8be1-4359-8a74-f8916c8f6d55-utilities" (OuterVolumeSpecName: "utilities") pod "692fafe2-8be1-4359-8a74-f8916c8f6d55" (UID: "692fafe2-8be1-4359-8a74-f8916c8f6d55"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.728013 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/692fafe2-8be1-4359-8a74-f8916c8f6d55-kube-api-access-78txq" (OuterVolumeSpecName: "kube-api-access-78txq") pod "692fafe2-8be1-4359-8a74-f8916c8f6d55" (UID: "692fafe2-8be1-4359-8a74-f8916c8f6d55"). InnerVolumeSpecName "kube-api-access-78txq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.729176 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7549289-fee3-4211-b340-731ff70593d1-kube-api-access-hwf78" (OuterVolumeSpecName: "kube-api-access-hwf78") pod "a7549289-fee3-4211-b340-731ff70593d1" (UID: "a7549289-fee3-4211-b340-731ff70593d1"). InnerVolumeSpecName "kube-api-access-hwf78". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.729249 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a44b0172-9ef1-4181-8380-bfe703bdc50d-kube-api-access-gg4gr" (OuterVolumeSpecName: "kube-api-access-gg4gr") pod "a44b0172-9ef1-4181-8380-bfe703bdc50d" (UID: "a44b0172-9ef1-4181-8380-bfe703bdc50d"). InnerVolumeSpecName "kube-api-access-gg4gr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.730399 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6955631f-9981-47a5-8ecb-8756df4e0256-kube-api-access-nbcms" (OuterVolumeSpecName: "kube-api-access-nbcms") pod "6955631f-9981-47a5-8ecb-8756df4e0256" (UID: "6955631f-9981-47a5-8ecb-8756df4e0256"). InnerVolumeSpecName "kube-api-access-nbcms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.738320 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-kube-api-access-w2c4j" (OuterVolumeSpecName: "kube-api-access-w2c4j") pod "c43a59b1-306c-4a0e-9f9f-fad2e9082d55" (UID: "c43a59b1-306c-4a0e-9f9f-fad2e9082d55"). InnerVolumeSpecName "kube-api-access-w2c4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.772403 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6955631f-9981-47a5-8ecb-8756df4e0256-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6955631f-9981-47a5-8ecb-8756df4e0256" (UID: "6955631f-9981-47a5-8ecb-8756df4e0256"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.796847 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a44b0172-9ef1-4181-8380-bfe703bdc50d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a44b0172-9ef1-4181-8380-bfe703bdc50d" (UID: "a44b0172-9ef1-4181-8380-bfe703bdc50d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.803074 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/692fafe2-8be1-4359-8a74-f8916c8f6d55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "692fafe2-8be1-4359-8a74-f8916c8f6d55" (UID: "692fafe2-8be1-4359-8a74-f8916c8f6d55"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.828513 4739 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.828551 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7549289-fee3-4211-b340-731ff70593d1-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.828562 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/692fafe2-8be1-4359-8a74-f8916c8f6d55-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.828570 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a44b0172-9ef1-4181-8380-bfe703bdc50d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.828578 4739 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.828588 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwf78\" (UniqueName: \"kubernetes.io/projected/a7549289-fee3-4211-b340-731ff70593d1-kube-api-access-hwf78\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.828599 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78txq\" (UniqueName: \"kubernetes.io/projected/692fafe2-8be1-4359-8a74-f8916c8f6d55-kube-api-access-78txq\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.828607 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/692fafe2-8be1-4359-8a74-f8916c8f6d55-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.828615 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2c4j\" (UniqueName: \"kubernetes.io/projected/c43a59b1-306c-4a0e-9f9f-fad2e9082d55-kube-api-access-w2c4j\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.828623 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg4gr\" (UniqueName: \"kubernetes.io/projected/a44b0172-9ef1-4181-8380-bfe703bdc50d-kube-api-access-gg4gr\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.828631 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6955631f-9981-47a5-8ecb-8756df4e0256-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.828641 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbcms\" (UniqueName: \"kubernetes.io/projected/6955631f-9981-47a5-8ecb-8756df4e0256-kube-api-access-nbcms\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.828649 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a44b0172-9ef1-4181-8380-bfe703bdc50d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.894713 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7549289-fee3-4211-b340-731ff70593d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7549289-fee3-4211-b340-731ff70593d1" (UID: "a7549289-fee3-4211-b340-731ff70593d1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:04:17 crc kubenswrapper[4739]: I0218 14:04:17.930092 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7549289-fee3-4211-b340-731ff70593d1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.026976 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-28vcn"] Feb 18 14:04:18 crc kubenswrapper[4739]: W0218 14:04:18.034268 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dc6acff_649a_4e95_ba42_ad79dae4a787.slice/crio-e9eda71e71701aa693e5fd614f905a971422ca7ae5ce554aba886c3e4c9a9f28 WatchSource:0}: Error finding container e9eda71e71701aa693e5fd614f905a971422ca7ae5ce554aba886c3e4c9a9f28: Status 404 returned error can't find the container with id e9eda71e71701aa693e5fd614f905a971422ca7ae5ce554aba886c3e4c9a9f28 Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.247318 4739 generic.go:334] "Generic (PLEG): container finished" podID="a7549289-fee3-4211-b340-731ff70593d1" containerID="91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8" exitCode=0 Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.247462 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm56z" event={"ID":"a7549289-fee3-4211-b340-731ff70593d1","Type":"ContainerDied","Data":"91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8"} Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.248641 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm56z" event={"ID":"a7549289-fee3-4211-b340-731ff70593d1","Type":"ContainerDied","Data":"ec2d2f157f528c4b55bc8096e827bd5672ec6bdfb957669781807b88427d0279"} Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.248675 4739 scope.go:117] "RemoveContainer" containerID="91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.247557 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fm56z" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.254040 4739 generic.go:334] "Generic (PLEG): container finished" podID="692fafe2-8be1-4359-8a74-f8916c8f6d55" containerID="44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5" exitCode=0 Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.254128 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ch5b" event={"ID":"692fafe2-8be1-4359-8a74-f8916c8f6d55","Type":"ContainerDied","Data":"44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5"} Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.254153 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2ch5b" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.254159 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ch5b" event={"ID":"692fafe2-8be1-4359-8a74-f8916c8f6d55","Type":"ContainerDied","Data":"e5127c0ff7f429af7d0aca6c5c08ea2c05b6bea576e6c38224ce6837bef827fc"} Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.256247 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" event={"ID":"0dc6acff-649a-4e95-ba42-ad79dae4a787","Type":"ContainerStarted","Data":"714b0e311cf9c7f19440fbee07a029c180a9456bf6cca7b41a364e0fdd30c2ef"} Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.256284 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" event={"ID":"0dc6acff-649a-4e95-ba42-ad79dae4a787","Type":"ContainerStarted","Data":"e9eda71e71701aa693e5fd614f905a971422ca7ae5ce554aba886c3e4c9a9f28"} Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.256794 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.264661 4739 scope.go:117] "RemoveContainer" containerID="d8f6d516155d589e7d1eb7a6eea99d4c413ff9b7a11cd8c67dd3e58c0a1f215c" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.266812 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-28vcn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.266929 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.272226 4739 generic.go:334] "Generic (PLEG): container finished" podID="c43a59b1-306c-4a0e-9f9f-fad2e9082d55" containerID="de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1" exitCode=0 Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.272334 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.272286 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" event={"ID":"c43a59b1-306c-4a0e-9f9f-fad2e9082d55","Type":"ContainerDied","Data":"de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1"} Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.272500 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-c4w7p" event={"ID":"c43a59b1-306c-4a0e-9f9f-fad2e9082d55","Type":"ContainerDied","Data":"6ae935e4756c3ac9dd9d42b9a107606b44a96ac470faeaa29302b35c3bb1c8df"} Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.275650 4739 generic.go:334] "Generic (PLEG): container finished" podID="a44b0172-9ef1-4181-8380-bfe703bdc50d" containerID="2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa" exitCode=0 Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.275776 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-47vjm" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.275784 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47vjm" event={"ID":"a44b0172-9ef1-4181-8380-bfe703bdc50d","Type":"ContainerDied","Data":"2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa"} Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.275829 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47vjm" event={"ID":"a44b0172-9ef1-4181-8380-bfe703bdc50d","Type":"ContainerDied","Data":"59dbe1e3611ef825eb60e8c102d83aabfcf6d0ed72189d4427096a9698a93bb3"} Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.293234 4739 generic.go:334] "Generic (PLEG): container finished" podID="6955631f-9981-47a5-8ecb-8756df4e0256" containerID="1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8" exitCode=0 Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.293362 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wznkg" event={"ID":"6955631f-9981-47a5-8ecb-8756df4e0256","Type":"ContainerDied","Data":"1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8"} Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.293391 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wznkg" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.293403 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wznkg" event={"ID":"6955631f-9981-47a5-8ecb-8756df4e0256","Type":"ContainerDied","Data":"10d8a724d59bd6a5d14617a528e748b2601030ae0dc43e290bc4b95d4dedba40"} Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.304929 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podStartSLOduration=1.304903953 podStartE2EDuration="1.304903953s" podCreationTimestamp="2026-02-18 14:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:04:18.291352651 +0000 UTC m=+290.787073583" watchObservedRunningTime="2026-02-18 14:04:18.304903953 +0000 UTC m=+290.800624895" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.309718 4739 scope.go:117] "RemoveContainer" containerID="9e47b85d370233a0bf233d7161a2f7316f31cfa5939b2305fca3b59a04f4c242" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.348807 4739 scope.go:117] "RemoveContainer" containerID="91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8" Feb 18 14:04:18 crc kubenswrapper[4739]: E0218 14:04:18.349511 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8\": container with ID starting with 91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8 not found: ID does not exist" containerID="91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.349570 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8"} err="failed to get container status \"91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8\": rpc error: code = NotFound desc = could not find container \"91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8\": container with ID starting with 91438e28b50af388b0ccee8af1d1601b61a1b4d8f5be6eec1cf1da08ca7c0ef8 not found: ID does not exist" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.349602 4739 scope.go:117] "RemoveContainer" containerID="d8f6d516155d589e7d1eb7a6eea99d4c413ff9b7a11cd8c67dd3e58c0a1f215c" Feb 18 14:04:18 crc kubenswrapper[4739]: E0218 14:04:18.350397 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8f6d516155d589e7d1eb7a6eea99d4c413ff9b7a11cd8c67dd3e58c0a1f215c\": container with ID starting with d8f6d516155d589e7d1eb7a6eea99d4c413ff9b7a11cd8c67dd3e58c0a1f215c not found: ID does not exist" containerID="d8f6d516155d589e7d1eb7a6eea99d4c413ff9b7a11cd8c67dd3e58c0a1f215c" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.350462 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8f6d516155d589e7d1eb7a6eea99d4c413ff9b7a11cd8c67dd3e58c0a1f215c"} err="failed to get container status \"d8f6d516155d589e7d1eb7a6eea99d4c413ff9b7a11cd8c67dd3e58c0a1f215c\": rpc error: code = NotFound desc = could not find container \"d8f6d516155d589e7d1eb7a6eea99d4c413ff9b7a11cd8c67dd3e58c0a1f215c\": container with ID starting with 
d8f6d516155d589e7d1eb7a6eea99d4c413ff9b7a11cd8c67dd3e58c0a1f215c not found: ID does not exist" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.350493 4739 scope.go:117] "RemoveContainer" containerID="9e47b85d370233a0bf233d7161a2f7316f31cfa5939b2305fca3b59a04f4c242" Feb 18 14:04:18 crc kubenswrapper[4739]: E0218 14:04:18.351234 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e47b85d370233a0bf233d7161a2f7316f31cfa5939b2305fca3b59a04f4c242\": container with ID starting with 9e47b85d370233a0bf233d7161a2f7316f31cfa5939b2305fca3b59a04f4c242 not found: ID does not exist" containerID="9e47b85d370233a0bf233d7161a2f7316f31cfa5939b2305fca3b59a04f4c242" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.351262 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e47b85d370233a0bf233d7161a2f7316f31cfa5939b2305fca3b59a04f4c242"} err="failed to get container status \"9e47b85d370233a0bf233d7161a2f7316f31cfa5939b2305fca3b59a04f4c242\": rpc error: code = NotFound desc = could not find container \"9e47b85d370233a0bf233d7161a2f7316f31cfa5939b2305fca3b59a04f4c242\": container with ID starting with 9e47b85d370233a0bf233d7161a2f7316f31cfa5939b2305fca3b59a04f4c242 not found: ID does not exist" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.351287 4739 scope.go:117] "RemoveContainer" containerID="44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.353206 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fm56z"] Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.357830 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fm56z"] Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.373013 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2ch5b"] Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.374203 4739 scope.go:117] "RemoveContainer" containerID="e02812fba123a1b640a8c7df98da2f8bd68a0b15a0172cda00785537e0d56662" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.380609 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2ch5b"] Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.387393 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-47vjm"] Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.390699 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-47vjm"] Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.407642 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wznkg"] Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.410290 4739 scope.go:117] "RemoveContainer" containerID="4c1b881b59ce09043ae130740ace2bb157df06ba6ab2c9601dc76ee0977e7608" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.418971 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="692fafe2-8be1-4359-8a74-f8916c8f6d55" path="/var/lib/kubelet/pods/692fafe2-8be1-4359-8a74-f8916c8f6d55/volumes" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.420239 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a44b0172-9ef1-4181-8380-bfe703bdc50d" 
path="/var/lib/kubelet/pods/a44b0172-9ef1-4181-8380-bfe703bdc50d/volumes" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.421013 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7549289-fee3-4211-b340-731ff70593d1" path="/var/lib/kubelet/pods/a7549289-fee3-4211-b340-731ff70593d1/volumes" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.422222 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wznkg"] Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.422252 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c4w7p"] Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.424688 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c4w7p"] Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.426105 4739 scope.go:117] "RemoveContainer" containerID="44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5" Feb 18 14:04:18 crc kubenswrapper[4739]: E0218 14:04:18.426518 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5\": container with ID starting with 44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5 not found: ID does not exist" containerID="44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.426548 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5"} err="failed to get container status \"44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5\": rpc error: code = NotFound desc = could not find container \"44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5\": container with ID starting with 44e5262a77b9c62b9f2a99154b8f98bfd0972444c9a5bf7e7fee5bbfd9dfb3b5 not found: ID does not exist" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.426566 4739 scope.go:117] "RemoveContainer" containerID="e02812fba123a1b640a8c7df98da2f8bd68a0b15a0172cda00785537e0d56662" Feb 18 14:04:18 crc kubenswrapper[4739]: E0218 14:04:18.426809 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e02812fba123a1b640a8c7df98da2f8bd68a0b15a0172cda00785537e0d56662\": container with ID starting with e02812fba123a1b640a8c7df98da2f8bd68a0b15a0172cda00785537e0d56662 not found: ID does not exist" containerID="e02812fba123a1b640a8c7df98da2f8bd68a0b15a0172cda00785537e0d56662" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.426846 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e02812fba123a1b640a8c7df98da2f8bd68a0b15a0172cda00785537e0d56662"} err="failed to get container status \"e02812fba123a1b640a8c7df98da2f8bd68a0b15a0172cda00785537e0d56662\": rpc error: code = NotFound desc = could not find container \"e02812fba123a1b640a8c7df98da2f8bd68a0b15a0172cda00785537e0d56662\": container with ID starting with e02812fba123a1b640a8c7df98da2f8bd68a0b15a0172cda00785537e0d56662 not found: ID does not exist" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.426877 4739 scope.go:117] "RemoveContainer" containerID="4c1b881b59ce09043ae130740ace2bb157df06ba6ab2c9601dc76ee0977e7608" Feb 18 14:04:18 crc 
kubenswrapper[4739]: E0218 14:04:18.427130 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c1b881b59ce09043ae130740ace2bb157df06ba6ab2c9601dc76ee0977e7608\": container with ID starting with 4c1b881b59ce09043ae130740ace2bb157df06ba6ab2c9601dc76ee0977e7608 not found: ID does not exist" containerID="4c1b881b59ce09043ae130740ace2bb157df06ba6ab2c9601dc76ee0977e7608" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.427162 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c1b881b59ce09043ae130740ace2bb157df06ba6ab2c9601dc76ee0977e7608"} err="failed to get container status \"4c1b881b59ce09043ae130740ace2bb157df06ba6ab2c9601dc76ee0977e7608\": rpc error: code = NotFound desc = could not find container \"4c1b881b59ce09043ae130740ace2bb157df06ba6ab2c9601dc76ee0977e7608\": container with ID starting with 4c1b881b59ce09043ae130740ace2bb157df06ba6ab2c9601dc76ee0977e7608 not found: ID does not exist" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.427179 4739 scope.go:117] "RemoveContainer" containerID="de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.445159 4739 scope.go:117] "RemoveContainer" containerID="de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1" Feb 18 14:04:18 crc kubenswrapper[4739]: E0218 14:04:18.445620 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1\": container with ID starting with de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1 not found: ID does not exist" containerID="de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.445648 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1"} err="failed to get container status \"de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1\": rpc error: code = NotFound desc = could not find container \"de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1\": container with ID starting with de9f077fc9e7938fe3ac44914b66fb876f9b9080f192541c66c4e09083d2b2e1 not found: ID does not exist" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.445672 4739 scope.go:117] "RemoveContainer" containerID="2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.460419 4739 scope.go:117] "RemoveContainer" containerID="e6219fd31904426472b017834034f247e7d9c77251713ad952a69e7b70cd8d10" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.476253 4739 scope.go:117] "RemoveContainer" containerID="551cb4bae6665ae27f7d5b2decaafebe71c83e00b8a73881bb3e336390146e0e" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.498335 4739 scope.go:117] "RemoveContainer" containerID="2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa" Feb 18 14:04:18 crc kubenswrapper[4739]: E0218 14:04:18.498903 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa\": container with ID starting with 2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa not found: ID does not 
exist" containerID="2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.498946 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa"} err="failed to get container status \"2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa\": rpc error: code = NotFound desc = could not find container \"2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa\": container with ID starting with 2a072d8e7ee80688d7e6a2bfd00765f65f8b99dd0c2604ab7279e7e11552efaa not found: ID does not exist" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.498968 4739 scope.go:117] "RemoveContainer" containerID="e6219fd31904426472b017834034f247e7d9c77251713ad952a69e7b70cd8d10" Feb 18 14:04:18 crc kubenswrapper[4739]: E0218 14:04:18.499600 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6219fd31904426472b017834034f247e7d9c77251713ad952a69e7b70cd8d10\": container with ID starting with e6219fd31904426472b017834034f247e7d9c77251713ad952a69e7b70cd8d10 not found: ID does not exist" containerID="e6219fd31904426472b017834034f247e7d9c77251713ad952a69e7b70cd8d10" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.499623 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6219fd31904426472b017834034f247e7d9c77251713ad952a69e7b70cd8d10"} err="failed to get container status \"e6219fd31904426472b017834034f247e7d9c77251713ad952a69e7b70cd8d10\": rpc error: code = NotFound desc = could not find container \"e6219fd31904426472b017834034f247e7d9c77251713ad952a69e7b70cd8d10\": container with ID starting with e6219fd31904426472b017834034f247e7d9c77251713ad952a69e7b70cd8d10 not found: ID does not exist" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.499637 4739 scope.go:117] "RemoveContainer" containerID="551cb4bae6665ae27f7d5b2decaafebe71c83e00b8a73881bb3e336390146e0e" Feb 18 14:04:18 crc kubenswrapper[4739]: E0218 14:04:18.500626 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"551cb4bae6665ae27f7d5b2decaafebe71c83e00b8a73881bb3e336390146e0e\": container with ID starting with 551cb4bae6665ae27f7d5b2decaafebe71c83e00b8a73881bb3e336390146e0e not found: ID does not exist" containerID="551cb4bae6665ae27f7d5b2decaafebe71c83e00b8a73881bb3e336390146e0e" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.500677 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"551cb4bae6665ae27f7d5b2decaafebe71c83e00b8a73881bb3e336390146e0e"} err="failed to get container status \"551cb4bae6665ae27f7d5b2decaafebe71c83e00b8a73881bb3e336390146e0e\": rpc error: code = NotFound desc = could not find container \"551cb4bae6665ae27f7d5b2decaafebe71c83e00b8a73881bb3e336390146e0e\": container with ID starting with 551cb4bae6665ae27f7d5b2decaafebe71c83e00b8a73881bb3e336390146e0e not found: ID does not exist" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.500709 4739 scope.go:117] "RemoveContainer" containerID="1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.512803 4739 scope.go:117] "RemoveContainer" containerID="8a4a2cb16b50f7bad58d4da02480e75d7e91e89560e15dff3da7b4be01b7785c" Feb 18 14:04:18 crc kubenswrapper[4739]: 
I0218 14:04:18.530998 4739 scope.go:117] "RemoveContainer" containerID="9bb3a5841305148839f6ad188df3883061d1654f9985c3ee6dbc318088131f64" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.547657 4739 scope.go:117] "RemoveContainer" containerID="1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8" Feb 18 14:04:18 crc kubenswrapper[4739]: E0218 14:04:18.548063 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8\": container with ID starting with 1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8 not found: ID does not exist" containerID="1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.548094 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8"} err="failed to get container status \"1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8\": rpc error: code = NotFound desc = could not find container \"1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8\": container with ID starting with 1182b426099ad4166c36fc240e2310778ef9df157a889781e33e0859af52d5b8 not found: ID does not exist" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.548117 4739 scope.go:117] "RemoveContainer" containerID="8a4a2cb16b50f7bad58d4da02480e75d7e91e89560e15dff3da7b4be01b7785c" Feb 18 14:04:18 crc kubenswrapper[4739]: E0218 14:04:18.548527 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a4a2cb16b50f7bad58d4da02480e75d7e91e89560e15dff3da7b4be01b7785c\": container with ID starting with 8a4a2cb16b50f7bad58d4da02480e75d7e91e89560e15dff3da7b4be01b7785c not found: ID does not exist" containerID="8a4a2cb16b50f7bad58d4da02480e75d7e91e89560e15dff3da7b4be01b7785c" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.548560 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a4a2cb16b50f7bad58d4da02480e75d7e91e89560e15dff3da7b4be01b7785c"} err="failed to get container status \"8a4a2cb16b50f7bad58d4da02480e75d7e91e89560e15dff3da7b4be01b7785c\": rpc error: code = NotFound desc = could not find container \"8a4a2cb16b50f7bad58d4da02480e75d7e91e89560e15dff3da7b4be01b7785c\": container with ID starting with 8a4a2cb16b50f7bad58d4da02480e75d7e91e89560e15dff3da7b4be01b7785c not found: ID does not exist" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.548578 4739 scope.go:117] "RemoveContainer" containerID="9bb3a5841305148839f6ad188df3883061d1654f9985c3ee6dbc318088131f64" Feb 18 14:04:18 crc kubenswrapper[4739]: E0218 14:04:18.548817 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bb3a5841305148839f6ad188df3883061d1654f9985c3ee6dbc318088131f64\": container with ID starting with 9bb3a5841305148839f6ad188df3883061d1654f9985c3ee6dbc318088131f64 not found: ID does not exist" containerID="9bb3a5841305148839f6ad188df3883061d1654f9985c3ee6dbc318088131f64" Feb 18 14:04:18 crc kubenswrapper[4739]: I0218 14:04:18.548840 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bb3a5841305148839f6ad188df3883061d1654f9985c3ee6dbc318088131f64"} err="failed to get container status 
\"9bb3a5841305148839f6ad188df3883061d1654f9985c3ee6dbc318088131f64\": rpc error: code = NotFound desc = could not find container \"9bb3a5841305148839f6ad188df3883061d1654f9985c3ee6dbc318088131f64\": container with ID starting with 9bb3a5841305148839f6ad188df3883061d1654f9985c3ee6dbc318088131f64 not found: ID does not exist" Feb 18 14:04:19 crc kubenswrapper[4739]: I0218 14:04:19.312427 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 14:04:20 crc kubenswrapper[4739]: I0218 14:04:20.416155 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6955631f-9981-47a5-8ecb-8756df4e0256" path="/var/lib/kubelet/pods/6955631f-9981-47a5-8ecb-8756df4e0256/volumes" Feb 18 14:04:20 crc kubenswrapper[4739]: I0218 14:04:20.417794 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c43a59b1-306c-4a0e-9f9f-fad2e9082d55" path="/var/lib/kubelet/pods/c43a59b1-306c-4a0e-9f9f-fad2e9082d55/volumes" Feb 18 14:04:28 crc kubenswrapper[4739]: I0218 14:04:28.085033 4739 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.636634 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9"] Feb 18 14:04:48 crc kubenswrapper[4739]: E0218 14:04:48.637371 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a44b0172-9ef1-4181-8380-bfe703bdc50d" containerName="registry-server" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637386 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a44b0172-9ef1-4181-8380-bfe703bdc50d" containerName="registry-server" Feb 18 14:04:48 crc kubenswrapper[4739]: E0218 14:04:48.637402 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7549289-fee3-4211-b340-731ff70593d1" containerName="extract-utilities" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637410 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7549289-fee3-4211-b340-731ff70593d1" containerName="extract-utilities" Feb 18 14:04:48 crc kubenswrapper[4739]: E0218 14:04:48.637422 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6955631f-9981-47a5-8ecb-8756df4e0256" containerName="extract-utilities" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637430 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6955631f-9981-47a5-8ecb-8756df4e0256" containerName="extract-utilities" Feb 18 14:04:48 crc kubenswrapper[4739]: E0218 14:04:48.637530 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6955631f-9981-47a5-8ecb-8756df4e0256" containerName="registry-server" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637545 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6955631f-9981-47a5-8ecb-8756df4e0256" containerName="registry-server" Feb 18 14:04:48 crc kubenswrapper[4739]: E0218 14:04:48.637555 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c43a59b1-306c-4a0e-9f9f-fad2e9082d55" containerName="marketplace-operator" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637564 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c43a59b1-306c-4a0e-9f9f-fad2e9082d55" containerName="marketplace-operator" Feb 18 14:04:48 crc kubenswrapper[4739]: E0218 14:04:48.637574 4739 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a44b0172-9ef1-4181-8380-bfe703bdc50d" containerName="extract-utilities" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637581 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a44b0172-9ef1-4181-8380-bfe703bdc50d" containerName="extract-utilities" Feb 18 14:04:48 crc kubenswrapper[4739]: E0218 14:04:48.637593 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692fafe2-8be1-4359-8a74-f8916c8f6d55" containerName="extract-utilities" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637602 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="692fafe2-8be1-4359-8a74-f8916c8f6d55" containerName="extract-utilities" Feb 18 14:04:48 crc kubenswrapper[4739]: E0218 14:04:48.637612 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7549289-fee3-4211-b340-731ff70593d1" containerName="registry-server" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637620 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7549289-fee3-4211-b340-731ff70593d1" containerName="registry-server" Feb 18 14:04:48 crc kubenswrapper[4739]: E0218 14:04:48.637630 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a44b0172-9ef1-4181-8380-bfe703bdc50d" containerName="extract-content" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637637 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a44b0172-9ef1-4181-8380-bfe703bdc50d" containerName="extract-content" Feb 18 14:04:48 crc kubenswrapper[4739]: E0218 14:04:48.637648 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6955631f-9981-47a5-8ecb-8756df4e0256" containerName="extract-content" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637655 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6955631f-9981-47a5-8ecb-8756df4e0256" containerName="extract-content" Feb 18 14:04:48 crc kubenswrapper[4739]: E0218 14:04:48.637667 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7549289-fee3-4211-b340-731ff70593d1" containerName="extract-content" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637674 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7549289-fee3-4211-b340-731ff70593d1" containerName="extract-content" Feb 18 14:04:48 crc kubenswrapper[4739]: E0218 14:04:48.637685 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692fafe2-8be1-4359-8a74-f8916c8f6d55" containerName="registry-server" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637692 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="692fafe2-8be1-4359-8a74-f8916c8f6d55" containerName="registry-server" Feb 18 14:04:48 crc kubenswrapper[4739]: E0218 14:04:48.637705 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692fafe2-8be1-4359-8a74-f8916c8f6d55" containerName="extract-content" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637712 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="692fafe2-8be1-4359-8a74-f8916c8f6d55" containerName="extract-content" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637815 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a44b0172-9ef1-4181-8380-bfe703bdc50d" containerName="registry-server" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637827 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="692fafe2-8be1-4359-8a74-f8916c8f6d55" containerName="registry-server" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637848 4739 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6955631f-9981-47a5-8ecb-8756df4e0256" containerName="registry-server" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637859 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c43a59b1-306c-4a0e-9f9f-fad2e9082d55" containerName="marketplace-operator" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.637870 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7549289-fee3-4211-b340-731ff70593d1" containerName="registry-server" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.638263 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.644594 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.650394 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.653867 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.654853 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.659085 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.676562 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9"] Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.707800 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/60fdcb8b-f362-4d6b-981a-aad2da285f70-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-2thj9\" (UID: \"60fdcb8b-f362-4d6b-981a-aad2da285f70\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.708035 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/60fdcb8b-f362-4d6b-981a-aad2da285f70-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-2thj9\" (UID: \"60fdcb8b-f362-4d6b-981a-aad2da285f70\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.708297 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs9jk\" (UniqueName: \"kubernetes.io/projected/60fdcb8b-f362-4d6b-981a-aad2da285f70-kube-api-access-cs9jk\") pod \"cluster-monitoring-operator-6d5b84845-2thj9\" (UID: \"60fdcb8b-f362-4d6b-981a-aad2da285f70\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.810376 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs9jk\" (UniqueName: \"kubernetes.io/projected/60fdcb8b-f362-4d6b-981a-aad2da285f70-kube-api-access-cs9jk\") pod 
\"cluster-monitoring-operator-6d5b84845-2thj9\" (UID: \"60fdcb8b-f362-4d6b-981a-aad2da285f70\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.810559 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/60fdcb8b-f362-4d6b-981a-aad2da285f70-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-2thj9\" (UID: \"60fdcb8b-f362-4d6b-981a-aad2da285f70\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.810608 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/60fdcb8b-f362-4d6b-981a-aad2da285f70-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-2thj9\" (UID: \"60fdcb8b-f362-4d6b-981a-aad2da285f70\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.812365 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/60fdcb8b-f362-4d6b-981a-aad2da285f70-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-2thj9\" (UID: \"60fdcb8b-f362-4d6b-981a-aad2da285f70\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.821080 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/60fdcb8b-f362-4d6b-981a-aad2da285f70-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-2thj9\" (UID: \"60fdcb8b-f362-4d6b-981a-aad2da285f70\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.842108 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs9jk\" (UniqueName: \"kubernetes.io/projected/60fdcb8b-f362-4d6b-981a-aad2da285f70-kube-api-access-cs9jk\") pod \"cluster-monitoring-operator-6d5b84845-2thj9\" (UID: \"60fdcb8b-f362-4d6b-981a-aad2da285f70\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" Feb 18 14:04:48 crc kubenswrapper[4739]: I0218 14:04:48.969956 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" Feb 18 14:04:49 crc kubenswrapper[4739]: I0218 14:04:49.439065 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9"] Feb 18 14:04:50 crc kubenswrapper[4739]: I0218 14:04:50.464156 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" event={"ID":"60fdcb8b-f362-4d6b-981a-aad2da285f70","Type":"ContainerStarted","Data":"ba20ef6e135638d511d72a2468e17e0b85632cac4307e195fb0a85a3620f776b"} Feb 18 14:04:51 crc kubenswrapper[4739]: I0218 14:04:51.470836 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" event={"ID":"60fdcb8b-f362-4d6b-981a-aad2da285f70","Type":"ContainerStarted","Data":"2d36deca78ce76e5bc7e10d8272e9998c1dd07ad4a61815e895f09527aca3787"} Feb 18 14:04:51 crc kubenswrapper[4739]: I0218 14:04:51.492847 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-2thj9" podStartSLOduration=1.6443391269999998 podStartE2EDuration="3.492817686s" podCreationTimestamp="2026-02-18 14:04:48 +0000 UTC" firstStartedPulling="2026-02-18 14:04:49.463822189 +0000 UTC m=+321.959543121" lastFinishedPulling="2026-02-18 14:04:51.312300758 +0000 UTC m=+323.808021680" observedRunningTime="2026-02-18 14:04:51.48821351 +0000 UTC m=+323.983934442" watchObservedRunningTime="2026-02-18 14:04:51.492817686 +0000 UTC m=+323.988538628" Feb 18 14:04:51 crc kubenswrapper[4739]: I0218 14:04:51.941477 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg"] Feb 18 14:04:51 crc kubenswrapper[4739]: I0218 14:04:51.943287 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:04:51 crc kubenswrapper[4739]: I0218 14:04:51.946128 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 18 14:04:51 crc kubenswrapper[4739]: I0218 14:04:51.947740 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-kjphg\" (UID: \"26e9543b-d10d-461c-8751-99e53b680e1c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:04:51 crc kubenswrapper[4739]: I0218 14:04:51.948275 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg"] Feb 18 14:04:52 crc kubenswrapper[4739]: I0218 14:04:52.049642 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-kjphg\" (UID: \"26e9543b-d10d-461c-8751-99e53b680e1c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:04:52 crc kubenswrapper[4739]: E0218 14:04:52.049801 4739 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:04:52 crc kubenswrapper[4739]: E0218 14:04:52.049923 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates podName:26e9543b-d10d-461c-8751-99e53b680e1c nodeName:}" failed. No retries permitted until 2026-02-18 14:04:52.549895404 +0000 UTC m=+325.045616366 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-kjphg" (UID: "26e9543b-d10d-461c-8751-99e53b680e1c") : secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:04:52 crc kubenswrapper[4739]: I0218 14:04:52.555479 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-kjphg\" (UID: \"26e9543b-d10d-461c-8751-99e53b680e1c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:04:52 crc kubenswrapper[4739]: E0218 14:04:52.555712 4739 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:04:52 crc kubenswrapper[4739]: E0218 14:04:52.555808 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates podName:26e9543b-d10d-461c-8751-99e53b680e1c nodeName:}" failed. No retries permitted until 2026-02-18 14:04:53.555780159 +0000 UTC m=+326.051501151 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-kjphg" (UID: "26e9543b-d10d-461c-8751-99e53b680e1c") : secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:04:53 crc kubenswrapper[4739]: I0218 14:04:53.566745 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-kjphg\" (UID: \"26e9543b-d10d-461c-8751-99e53b680e1c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:04:53 crc kubenswrapper[4739]: E0218 14:04:53.566935 4739 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:04:53 crc kubenswrapper[4739]: E0218 14:04:53.567750 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates podName:26e9543b-d10d-461c-8751-99e53b680e1c nodeName:}" failed. No retries permitted until 2026-02-18 14:04:55.567723468 +0000 UTC m=+328.063444430 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-kjphg" (UID: "26e9543b-d10d-461c-8751-99e53b680e1c") : secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:04:55 crc kubenswrapper[4739]: I0218 14:04:55.592073 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-kjphg\" (UID: \"26e9543b-d10d-461c-8751-99e53b680e1c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:04:55 crc kubenswrapper[4739]: E0218 14:04:55.592229 4739 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:04:55 crc kubenswrapper[4739]: E0218 14:04:55.592483 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates podName:26e9543b-d10d-461c-8751-99e53b680e1c nodeName:}" failed. No retries permitted until 2026-02-18 14:04:59.592467473 +0000 UTC m=+332.088188395 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-kjphg" (UID: "26e9543b-d10d-461c-8751-99e53b680e1c") : secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:04:59 crc kubenswrapper[4739]: I0218 14:04:59.372818 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:04:59 crc kubenswrapper[4739]: I0218 14:04:59.372899 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:04:59 crc kubenswrapper[4739]: I0218 14:04:59.642679 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-kjphg\" (UID: \"26e9543b-d10d-461c-8751-99e53b680e1c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:04:59 crc kubenswrapper[4739]: E0218 14:04:59.642926 4739 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:04:59 crc kubenswrapper[4739]: E0218 14:04:59.643036 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates podName:26e9543b-d10d-461c-8751-99e53b680e1c nodeName:}" failed. No retries permitted until 2026-02-18 14:05:07.643008728 +0000 UTC m=+340.138729680 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-kjphg" (UID: "26e9543b-d10d-461c-8751-99e53b680e1c") : secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:05:07 crc kubenswrapper[4739]: I0218 14:05:07.698184 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-kjphg\" (UID: \"26e9543b-d10d-461c-8751-99e53b680e1c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:05:07 crc kubenswrapper[4739]: E0218 14:05:07.698357 4739 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:05:07 crc kubenswrapper[4739]: E0218 14:05:07.698900 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates podName:26e9543b-d10d-461c-8751-99e53b680e1c nodeName:}" failed. No retries permitted until 2026-02-18 14:05:23.698882076 +0000 UTC m=+356.194602998 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-kjphg" (UID: "26e9543b-d10d-461c-8751-99e53b680e1c") : secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.148046 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lbspb"] Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.148592 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" podUID="d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713" containerName="controller-manager" containerID="cri-o://2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418" gracePeriod=30 Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.269255 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz"] Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.269782 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" podUID="eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39" containerName="route-controller-manager" containerID="cri-o://8fe561d69997a42f05c72d8193b431b41c69814dd140f03816516811cdf03267" gracePeriod=30 Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.487681 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.573765 4739 generic.go:334] "Generic (PLEG): container finished" podID="d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713" containerID="2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418" exitCode=0 Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.573844 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" event={"ID":"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713","Type":"ContainerDied","Data":"2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418"} Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.573870 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" event={"ID":"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713","Type":"ContainerDied","Data":"1542f2a32767ea611a0dd0201115ccf7f36e2a7c9f28dba16c4caf8e215a8b80"} Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.573887 4739 scope.go:117] "RemoveContainer" containerID="2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.573989 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lbspb" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.575963 4739 generic.go:334] "Generic (PLEG): container finished" podID="eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39" containerID="8fe561d69997a42f05c72d8193b431b41c69814dd140f03816516811cdf03267" exitCode=0 Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.576003 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" event={"ID":"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39","Type":"ContainerDied","Data":"8fe561d69997a42f05c72d8193b431b41c69814dd140f03816516811cdf03267"} Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.593234 4739 scope.go:117] "RemoveContainer" containerID="2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418" Feb 18 14:05:10 crc kubenswrapper[4739]: E0218 14:05:10.595924 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418\": container with ID starting with 2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418 not found: ID does not exist" containerID="2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.596942 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418"} err="failed to get container status \"2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418\": rpc error: code = NotFound desc = could not find container \"2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418\": container with ID starting with 2a58f44722648b66e825982aa9116705a2c4f7ef26c3b1ae4ba542b31edd6418 not found: ID does not exist" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.622305 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.638426 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-serving-cert\") pod \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.638535 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-config\") pod \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.638584 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drssc\" (UniqueName: \"kubernetes.io/projected/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-kube-api-access-drssc\") pod \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.638639 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-proxy-ca-bundles\") pod \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.638659 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-client-ca\") pod \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\" (UID: \"d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713\") " Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.639670 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-client-ca" (OuterVolumeSpecName: "client-ca") pod "d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713" (UID: "d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.639789 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-config" (OuterVolumeSpecName: "config") pod "d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713" (UID: "d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.640275 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713" (UID: "d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.646123 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713" (UID: "d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.646254 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-kube-api-access-drssc" (OuterVolumeSpecName: "kube-api-access-drssc") pod "d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713" (UID: "d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713"). InnerVolumeSpecName "kube-api-access-drssc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.740116 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-config\") pod \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.740580 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bdfq\" (UniqueName: \"kubernetes.io/projected/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-kube-api-access-2bdfq\") pod \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.740831 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-client-ca\") pod \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.741087 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-serving-cert\") pod \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\" (UID: \"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39\") " Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.741104 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-config" (OuterVolumeSpecName: "config") pod "eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39" (UID: "eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.741501 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-client-ca" (OuterVolumeSpecName: "client-ca") pod "eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39" (UID: "eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.741935 4739 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.742101 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.742236 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.742884 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.743039 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drssc\" (UniqueName: \"kubernetes.io/projected/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-kube-api-access-drssc\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.743185 4739 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.743371 4739 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.743998 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-kube-api-access-2bdfq" (OuterVolumeSpecName: "kube-api-access-2bdfq") pod "eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39" (UID: "eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39"). InnerVolumeSpecName "kube-api-access-2bdfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.744090 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39" (UID: "eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.846161 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bdfq\" (UniqueName: \"kubernetes.io/projected/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-kube-api-access-2bdfq\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.846217 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.914618 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lbspb"] Feb 18 14:05:10 crc kubenswrapper[4739]: I0218 14:05:10.919995 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lbspb"] Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.586406 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" event={"ID":"eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39","Type":"ContainerDied","Data":"fae6dc1b6a99284726a5c316e9b142133b64b76e06f03661a6baf4b3e9620752"} Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.586461 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.586511 4739 scope.go:117] "RemoveContainer" containerID="8fe561d69997a42f05c72d8193b431b41c69814dd140f03816516811cdf03267" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.620360 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz"] Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.637040 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-hkhdz"] Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.674569 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b85c597f-7cj2x"] Feb 18 14:05:11 crc kubenswrapper[4739]: E0218 14:05:11.674811 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713" containerName="controller-manager" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.674826 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713" containerName="controller-manager" Feb 18 14:05:11 crc kubenswrapper[4739]: E0218 14:05:11.674837 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39" containerName="route-controller-manager" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.674848 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39" containerName="route-controller-manager" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.674960 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39" containerName="route-controller-manager" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.674970 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713" 
containerName="controller-manager" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.675396 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.677381 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.677985 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.678116 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.678281 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.678410 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.681740 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.684491 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2"] Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.685102 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.687431 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b85c597f-7cj2x"] Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.689762 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.690030 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.691143 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2"] Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.693324 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.693506 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.693650 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.694882 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.693846 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 
14:05:11.756206 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxp54\" (UniqueName: \"kubernetes.io/projected/386aca13-7178-47f2-bf26-bb78e5c5ff49-kube-api-access-fxp54\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.756555 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-proxy-ca-bundles\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.756728 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/386aca13-7178-47f2-bf26-bb78e5c5ff49-serving-cert\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.756841 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-config\") pod \"route-controller-manager-5f87d8d559-8cvd2\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.756942 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-config\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.757203 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24w9r\" (UniqueName: \"kubernetes.io/projected/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-kube-api-access-24w9r\") pod \"route-controller-manager-5f87d8d559-8cvd2\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.757275 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-client-ca\") pod \"route-controller-manager-5f87d8d559-8cvd2\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.757320 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-serving-cert\") pod \"route-controller-manager-5f87d8d559-8cvd2\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc 
kubenswrapper[4739]: I0218 14:05:11.757352 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-client-ca\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.858440 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-config\") pod \"route-controller-manager-5f87d8d559-8cvd2\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.859574 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-config\") pod \"route-controller-manager-5f87d8d559-8cvd2\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.859718 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-config\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.859911 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24w9r\" (UniqueName: \"kubernetes.io/projected/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-kube-api-access-24w9r\") pod \"route-controller-manager-5f87d8d559-8cvd2\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.859966 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-client-ca\") pod \"route-controller-manager-5f87d8d559-8cvd2\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.860017 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-serving-cert\") pod \"route-controller-manager-5f87d8d559-8cvd2\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.860055 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-client-ca\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.860110 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxp54\" 
(UniqueName: \"kubernetes.io/projected/386aca13-7178-47f2-bf26-bb78e5c5ff49-kube-api-access-fxp54\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.860154 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-proxy-ca-bundles\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.860189 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/386aca13-7178-47f2-bf26-bb78e5c5ff49-serving-cert\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.860951 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-config\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.861142 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-client-ca\") pod \"route-controller-manager-5f87d8d559-8cvd2\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.861769 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-client-ca\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.862181 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-proxy-ca-bundles\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.865081 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/386aca13-7178-47f2-bf26-bb78e5c5ff49-serving-cert\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.866803 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-serving-cert\") pod \"route-controller-manager-5f87d8d559-8cvd2\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " 
pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.880778 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24w9r\" (UniqueName: \"kubernetes.io/projected/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-kube-api-access-24w9r\") pod \"route-controller-manager-5f87d8d559-8cvd2\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:11 crc kubenswrapper[4739]: I0218 14:05:11.882396 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxp54\" (UniqueName: \"kubernetes.io/projected/386aca13-7178-47f2-bf26-bb78e5c5ff49-kube-api-access-fxp54\") pod \"controller-manager-5b85c597f-7cj2x\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.010778 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.018796 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.205299 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2"] Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.242628 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b85c597f-7cj2x"] Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.417259 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713" path="/var/lib/kubelet/pods/d88dbdf9-f0d5-44e2-91c8-6bcc8a6e3713/volumes" Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.417920 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39" path="/var/lib/kubelet/pods/eba0da7f-a1b3-4d3b-8fbd-cdcc88efcc39/volumes" Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.593197 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" event={"ID":"386aca13-7178-47f2-bf26-bb78e5c5ff49","Type":"ContainerStarted","Data":"a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b"} Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.593246 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" event={"ID":"386aca13-7178-47f2-bf26-bb78e5c5ff49","Type":"ContainerStarted","Data":"859379702eb5973733471369b3a7d9b5d3eb03bf0ee5ef2eb69a21d044e09a3e"} Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.593533 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.595059 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" event={"ID":"adb7e32d-b0a0-48cd-9bd0-03a390dcead5","Type":"ContainerStarted","Data":"0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446"} Feb 18 14:05:12 crc kubenswrapper[4739]: 
I0218 14:05:12.595122 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" event={"ID":"adb7e32d-b0a0-48cd-9bd0-03a390dcead5","Type":"ContainerStarted","Data":"b1b185d98c36c27c5d4462426e4a18d83db79ec0473ca9aef0bf6917797ee642"} Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.595429 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.599376 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.611797 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" podStartSLOduration=2.611780084 podStartE2EDuration="2.611780084s" podCreationTimestamp="2026-02-18 14:05:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:05:12.611069856 +0000 UTC m=+345.106790778" watchObservedRunningTime="2026-02-18 14:05:12.611780084 +0000 UTC m=+345.107501006" Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.657316 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" podStartSLOduration=2.6572976539999997 podStartE2EDuration="2.657297654s" podCreationTimestamp="2026-02-18 14:05:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:05:12.655253353 +0000 UTC m=+345.150974275" watchObservedRunningTime="2026-02-18 14:05:12.657297654 +0000 UTC m=+345.153018576" Feb 18 14:05:12 crc kubenswrapper[4739]: I0218 14:05:12.830805 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:23 crc kubenswrapper[4739]: I0218 14:05:23.721251 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-kjphg\" (UID: \"26e9543b-d10d-461c-8751-99e53b680e1c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:05:23 crc kubenswrapper[4739]: E0218 14:05:23.721428 4739 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:05:23 crc kubenswrapper[4739]: E0218 14:05:23.722046 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates podName:26e9543b-d10d-461c-8751-99e53b680e1c nodeName:}" failed. No retries permitted until 2026-02-18 14:05:55.722021831 +0000 UTC m=+388.217742753 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-kjphg" (UID: "26e9543b-d10d-461c-8751-99e53b680e1c") : secret "prometheus-operator-admission-webhook-tls" not found Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.068042 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n5478"] Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.069048 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.071528 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.081555 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n5478"] Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.139466 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzvjj\" (UniqueName: \"kubernetes.io/projected/6eb612bd-4974-4e9b-91d7-0240ce057aa5-kube-api-access-zzvjj\") pod \"redhat-operators-n5478\" (UID: \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\") " pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.139526 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb612bd-4974-4e9b-91d7-0240ce057aa5-utilities\") pod \"redhat-operators-n5478\" (UID: \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\") " pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.139556 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb612bd-4974-4e9b-91d7-0240ce057aa5-catalog-content\") pod \"redhat-operators-n5478\" (UID: \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\") " pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.241043 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzvjj\" (UniqueName: \"kubernetes.io/projected/6eb612bd-4974-4e9b-91d7-0240ce057aa5-kube-api-access-zzvjj\") pod \"redhat-operators-n5478\" (UID: \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\") " pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.241123 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb612bd-4974-4e9b-91d7-0240ce057aa5-utilities\") pod \"redhat-operators-n5478\" (UID: \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\") " pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.241159 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb612bd-4974-4e9b-91d7-0240ce057aa5-catalog-content\") pod \"redhat-operators-n5478\" (UID: \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\") " pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.241839 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb612bd-4974-4e9b-91d7-0240ce057aa5-catalog-content\") pod \"redhat-operators-n5478\" (UID: \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\") " pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.241862 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb612bd-4974-4e9b-91d7-0240ce057aa5-utilities\") pod \"redhat-operators-n5478\" (UID: \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\") " pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.264142 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p4z7n"] Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.265313 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzvjj\" (UniqueName: \"kubernetes.io/projected/6eb612bd-4974-4e9b-91d7-0240ce057aa5-kube-api-access-zzvjj\") pod \"redhat-operators-n5478\" (UID: \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\") " pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.265351 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.269538 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.275069 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4z7n"] Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.342788 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cc54472-7fa4-457e-a332-420ce4a7da93-catalog-content\") pod \"redhat-marketplace-p4z7n\" (UID: \"0cc54472-7fa4-457e-a332-420ce4a7da93\") " pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.343176 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cc54472-7fa4-457e-a332-420ce4a7da93-utilities\") pod \"redhat-marketplace-p4z7n\" (UID: \"0cc54472-7fa4-457e-a332-420ce4a7da93\") " pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.343208 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4kjm\" (UniqueName: \"kubernetes.io/projected/0cc54472-7fa4-457e-a332-420ce4a7da93-kube-api-access-c4kjm\") pod \"redhat-marketplace-p4z7n\" (UID: \"0cc54472-7fa4-457e-a332-420ce4a7da93\") " pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.396835 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.444774 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cc54472-7fa4-457e-a332-420ce4a7da93-utilities\") pod \"redhat-marketplace-p4z7n\" (UID: \"0cc54472-7fa4-457e-a332-420ce4a7da93\") " pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.444822 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4kjm\" (UniqueName: \"kubernetes.io/projected/0cc54472-7fa4-457e-a332-420ce4a7da93-kube-api-access-c4kjm\") pod \"redhat-marketplace-p4z7n\" (UID: \"0cc54472-7fa4-457e-a332-420ce4a7da93\") " pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.444878 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cc54472-7fa4-457e-a332-420ce4a7da93-catalog-content\") pod \"redhat-marketplace-p4z7n\" (UID: \"0cc54472-7fa4-457e-a332-420ce4a7da93\") " pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.445435 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cc54472-7fa4-457e-a332-420ce4a7da93-utilities\") pod \"redhat-marketplace-p4z7n\" (UID: \"0cc54472-7fa4-457e-a332-420ce4a7da93\") " pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.445600 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cc54472-7fa4-457e-a332-420ce4a7da93-catalog-content\") pod \"redhat-marketplace-p4z7n\" (UID: \"0cc54472-7fa4-457e-a332-420ce4a7da93\") " pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.469128 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4kjm\" (UniqueName: \"kubernetes.io/projected/0cc54472-7fa4-457e-a332-420ce4a7da93-kube-api-access-c4kjm\") pod \"redhat-marketplace-p4z7n\" (UID: \"0cc54472-7fa4-457e-a332-420ce4a7da93\") " pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.603155 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:25 crc kubenswrapper[4739]: I0218 14:05:25.859425 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n5478"] Feb 18 14:05:25 crc kubenswrapper[4739]: W0218 14:05:25.865333 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6eb612bd_4974_4e9b_91d7_0240ce057aa5.slice/crio-81b46654edd19d1432b58f9bd2576a94f39cc05f5d205ae85216f27b952d6aca WatchSource:0}: Error finding container 81b46654edd19d1432b58f9bd2576a94f39cc05f5d205ae85216f27b952d6aca: Status 404 returned error can't find the container with id 81b46654edd19d1432b58f9bd2576a94f39cc05f5d205ae85216f27b952d6aca Feb 18 14:05:26 crc kubenswrapper[4739]: I0218 14:05:26.038097 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4z7n"] Feb 18 14:05:26 crc kubenswrapper[4739]: W0218 14:05:26.055767 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cc54472_7fa4_457e_a332_420ce4a7da93.slice/crio-0d616990b305a6fbcf0beb47439116c703d5adf6230960553ea209ae19651d9c WatchSource:0}: Error finding container 0d616990b305a6fbcf0beb47439116c703d5adf6230960553ea209ae19651d9c: Status 404 returned error can't find the container with id 0d616990b305a6fbcf0beb47439116c703d5adf6230960553ea209ae19651d9c Feb 18 14:05:26 crc kubenswrapper[4739]: I0218 14:05:26.695312 4739 generic.go:334] "Generic (PLEG): container finished" podID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" containerID="cd68ab8027f647103dec3361912c6740c7fe91057ba0556d4d221b3bd0864eff" exitCode=0 Feb 18 14:05:26 crc kubenswrapper[4739]: I0218 14:05:26.695539 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5478" event={"ID":"6eb612bd-4974-4e9b-91d7-0240ce057aa5","Type":"ContainerDied","Data":"cd68ab8027f647103dec3361912c6740c7fe91057ba0556d4d221b3bd0864eff"} Feb 18 14:05:26 crc kubenswrapper[4739]: I0218 14:05:26.695730 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5478" event={"ID":"6eb612bd-4974-4e9b-91d7-0240ce057aa5","Type":"ContainerStarted","Data":"81b46654edd19d1432b58f9bd2576a94f39cc05f5d205ae85216f27b952d6aca"} Feb 18 14:05:26 crc kubenswrapper[4739]: I0218 14:05:26.700109 4739 generic.go:334] "Generic (PLEG): container finished" podID="0cc54472-7fa4-457e-a332-420ce4a7da93" containerID="2d78331716a2f84a755f4a350cf5232ec80ebedd83e8ac65ef7e623049513e2d" exitCode=0 Feb 18 14:05:26 crc kubenswrapper[4739]: I0218 14:05:26.700153 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4z7n" event={"ID":"0cc54472-7fa4-457e-a332-420ce4a7da93","Type":"ContainerDied","Data":"2d78331716a2f84a755f4a350cf5232ec80ebedd83e8ac65ef7e623049513e2d"} Feb 18 14:05:26 crc kubenswrapper[4739]: I0218 14:05:26.700184 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4z7n" event={"ID":"0cc54472-7fa4-457e-a332-420ce4a7da93","Type":"ContainerStarted","Data":"0d616990b305a6fbcf0beb47439116c703d5adf6230960553ea209ae19651d9c"} Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.474164 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v6sbz"] Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.475079 4739 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:27 crc kubenswrapper[4739]: W0218 14:05:27.477157 4739 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": failed to list *v1.Secret: secrets "certified-operators-dockercfg-4rs5g" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Feb 18 14:05:27 crc kubenswrapper[4739]: E0218 14:05:27.477316 4739 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-4rs5g\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"certified-operators-dockercfg-4rs5g\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.517876 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v6sbz"] Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.580751 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ff243b-1f5d-4ab1-af8c-38a98b870d27-catalog-content\") pod \"certified-operators-v6sbz\" (UID: \"c0ff243b-1f5d-4ab1-af8c-38a98b870d27\") " pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.581094 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9wgv\" (UniqueName: \"kubernetes.io/projected/c0ff243b-1f5d-4ab1-af8c-38a98b870d27-kube-api-access-d9wgv\") pod \"certified-operators-v6sbz\" (UID: \"c0ff243b-1f5d-4ab1-af8c-38a98b870d27\") " pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.581156 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ff243b-1f5d-4ab1-af8c-38a98b870d27-utilities\") pod \"certified-operators-v6sbz\" (UID: \"c0ff243b-1f5d-4ab1-af8c-38a98b870d27\") " pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.682051 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ff243b-1f5d-4ab1-af8c-38a98b870d27-catalog-content\") pod \"certified-operators-v6sbz\" (UID: \"c0ff243b-1f5d-4ab1-af8c-38a98b870d27\") " pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.682131 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9wgv\" (UniqueName: \"kubernetes.io/projected/c0ff243b-1f5d-4ab1-af8c-38a98b870d27-kube-api-access-d9wgv\") pod \"certified-operators-v6sbz\" (UID: \"c0ff243b-1f5d-4ab1-af8c-38a98b870d27\") " pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.682535 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ff243b-1f5d-4ab1-af8c-38a98b870d27-utilities\") pod \"certified-operators-v6sbz\" (UID: 
\"c0ff243b-1f5d-4ab1-af8c-38a98b870d27\") " pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.682700 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0ff243b-1f5d-4ab1-af8c-38a98b870d27-catalog-content\") pod \"certified-operators-v6sbz\" (UID: \"c0ff243b-1f5d-4ab1-af8c-38a98b870d27\") " pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.684356 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0ff243b-1f5d-4ab1-af8c-38a98b870d27-utilities\") pod \"certified-operators-v6sbz\" (UID: \"c0ff243b-1f5d-4ab1-af8c-38a98b870d27\") " pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.695053 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-94tzm"] Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.696408 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.699090 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.704517 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-94tzm"] Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.705629 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9wgv\" (UniqueName: \"kubernetes.io/projected/c0ff243b-1f5d-4ab1-af8c-38a98b870d27-kube-api-access-d9wgv\") pod \"certified-operators-v6sbz\" (UID: \"c0ff243b-1f5d-4ab1-af8c-38a98b870d27\") " pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.708080 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5478" event={"ID":"6eb612bd-4974-4e9b-91d7-0240ce057aa5","Type":"ContainerStarted","Data":"eb5f5e626edf6dc5aeeea1562bacf9b30a38b08f9a8a02a3adf3e93c88281a22"} Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.710333 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4z7n" event={"ID":"0cc54472-7fa4-457e-a332-420ce4a7da93","Type":"ContainerStarted","Data":"0ef00bb43e458bf2050b4f932e4a377fa199e86da31eba836458fbe900607947"} Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.783412 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-catalog-content\") pod \"community-operators-94tzm\" (UID: \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\") " pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.783492 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-utilities\") pod \"community-operators-94tzm\" (UID: \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\") " pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.783538 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvzff\" (UniqueName: \"kubernetes.io/projected/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-kube-api-access-lvzff\") pod \"community-operators-94tzm\" (UID: \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\") " pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.884536 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-catalog-content\") pod \"community-operators-94tzm\" (UID: \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\") " pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.884583 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-utilities\") pod \"community-operators-94tzm\" (UID: \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\") " pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.884644 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvzff\" (UniqueName: \"kubernetes.io/projected/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-kube-api-access-lvzff\") pod \"community-operators-94tzm\" (UID: \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\") " pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.885131 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-utilities\") pod \"community-operators-94tzm\" (UID: \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\") " pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.885307 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-catalog-content\") pod \"community-operators-94tzm\" (UID: \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\") " pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:27 crc kubenswrapper[4739]: I0218 14:05:27.905289 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvzff\" (UniqueName: \"kubernetes.io/projected/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-kube-api-access-lvzff\") pod \"community-operators-94tzm\" (UID: \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\") " pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.099894 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.317918 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.317958 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.537960 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-94tzm"] Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.717278 4739 generic.go:334] "Generic (PLEG): container finished" podID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" containerID="eb5f5e626edf6dc5aeeea1562bacf9b30a38b08f9a8a02a3adf3e93c88281a22" exitCode=0 Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.717360 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5478" event={"ID":"6eb612bd-4974-4e9b-91d7-0240ce057aa5","Type":"ContainerDied","Data":"eb5f5e626edf6dc5aeeea1562bacf9b30a38b08f9a8a02a3adf3e93c88281a22"} Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.719171 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94tzm" event={"ID":"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f","Type":"ContainerDied","Data":"18a249ca987a1ebbb58305862051507b7e7af51d7b66dfb11920eefffec1ed3f"} Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.719078 4739 generic.go:334] "Generic (PLEG): container finished" podID="3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" containerID="18a249ca987a1ebbb58305862051507b7e7af51d7b66dfb11920eefffec1ed3f" exitCode=0 Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.719322 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94tzm" event={"ID":"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f","Type":"ContainerStarted","Data":"9db4c60d6322480e701f551598fedffb94eb253b0f0fc2549d5772b70af9210c"} Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.722426 4739 generic.go:334] "Generic (PLEG): container finished" podID="0cc54472-7fa4-457e-a332-420ce4a7da93" containerID="0ef00bb43e458bf2050b4f932e4a377fa199e86da31eba836458fbe900607947" exitCode=0 Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.722470 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4z7n" event={"ID":"0cc54472-7fa4-457e-a332-420ce4a7da93","Type":"ContainerDied","Data":"0ef00bb43e458bf2050b4f932e4a377fa199e86da31eba836458fbe900607947"} Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.722493 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4z7n" event={"ID":"0cc54472-7fa4-457e-a332-420ce4a7da93","Type":"ContainerStarted","Data":"34c0039c5c354e86e2b1d1d3fbf6d5fcc9f2e4f0b922df5cb3730e4347df63f4"} Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.769482 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v6sbz"] Feb 18 14:05:28 crc kubenswrapper[4739]: I0218 14:05:28.771557 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p4z7n" podStartSLOduration=2.313102586 podStartE2EDuration="3.771533453s" podCreationTimestamp="2026-02-18 14:05:25 +0000 UTC" firstStartedPulling="2026-02-18 14:05:26.701839592 +0000 UTC m=+359.197560524" lastFinishedPulling="2026-02-18 14:05:28.160270469 +0000 UTC m=+360.655991391" observedRunningTime="2026-02-18 14:05:28.768002234 +0000 UTC m=+361.263723166" watchObservedRunningTime="2026-02-18 14:05:28.771533453 +0000 UTC m=+361.267254385" Feb 18 14:05:28 crc kubenswrapper[4739]: W0218 14:05:28.776581 4739 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0ff243b_1f5d_4ab1_af8c_38a98b870d27.slice/crio-8559ec93d437695a8a52075d9252ff09f0cddd6be5ff8eaeaa628e40537918d2 WatchSource:0}: Error finding container 8559ec93d437695a8a52075d9252ff09f0cddd6be5ff8eaeaa628e40537918d2: Status 404 returned error can't find the container with id 8559ec93d437695a8a52075d9252ff09f0cddd6be5ff8eaeaa628e40537918d2 Feb 18 14:05:29 crc kubenswrapper[4739]: I0218 14:05:29.372731 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:05:29 crc kubenswrapper[4739]: I0218 14:05:29.373393 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:05:29 crc kubenswrapper[4739]: I0218 14:05:29.729030 4739 generic.go:334] "Generic (PLEG): container finished" podID="c0ff243b-1f5d-4ab1-af8c-38a98b870d27" containerID="9c52441e88eb1150b26b8bccb866ea4c8f6076109e0e9fe0290ac66f558571ef" exitCode=0 Feb 18 14:05:29 crc kubenswrapper[4739]: I0218 14:05:29.729103 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6sbz" event={"ID":"c0ff243b-1f5d-4ab1-af8c-38a98b870d27","Type":"ContainerDied","Data":"9c52441e88eb1150b26b8bccb866ea4c8f6076109e0e9fe0290ac66f558571ef"} Feb 18 14:05:29 crc kubenswrapper[4739]: I0218 14:05:29.729130 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6sbz" event={"ID":"c0ff243b-1f5d-4ab1-af8c-38a98b870d27","Type":"ContainerStarted","Data":"8559ec93d437695a8a52075d9252ff09f0cddd6be5ff8eaeaa628e40537918d2"} Feb 18 14:05:29 crc kubenswrapper[4739]: I0218 14:05:29.734569 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5478" event={"ID":"6eb612bd-4974-4e9b-91d7-0240ce057aa5","Type":"ContainerStarted","Data":"65422be5444c8a4ea68ae396ec7f1c722474a478587aebd1878eee8ec7e12e64"} Feb 18 14:05:29 crc kubenswrapper[4739]: I0218 14:05:29.783968 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n5478" podStartSLOduration=2.35602148 podStartE2EDuration="4.783942535s" podCreationTimestamp="2026-02-18 14:05:25 +0000 UTC" firstStartedPulling="2026-02-18 14:05:26.699920744 +0000 UTC m=+359.195641686" lastFinishedPulling="2026-02-18 14:05:29.127841819 +0000 UTC m=+361.623562741" observedRunningTime="2026-02-18 14:05:29.775190506 +0000 UTC m=+362.270911428" watchObservedRunningTime="2026-02-18 14:05:29.783942535 +0000 UTC m=+362.279663457" Feb 18 14:05:30 crc kubenswrapper[4739]: I0218 14:05:30.163663 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b85c597f-7cj2x"] Feb 18 14:05:30 crc kubenswrapper[4739]: I0218 14:05:30.164248 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" podUID="386aca13-7178-47f2-bf26-bb78e5c5ff49" containerName="controller-manager" 
containerID="cri-o://a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b" gracePeriod=30 Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.732686 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.742747 4739 generic.go:334] "Generic (PLEG): container finished" podID="386aca13-7178-47f2-bf26-bb78e5c5ff49" containerID="a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b" exitCode=0 Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.742814 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.742849 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" event={"ID":"386aca13-7178-47f2-bf26-bb78e5c5ff49","Type":"ContainerDied","Data":"a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b"} Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.742894 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b85c597f-7cj2x" event={"ID":"386aca13-7178-47f2-bf26-bb78e5c5ff49","Type":"ContainerDied","Data":"859379702eb5973733471369b3a7d9b5d3eb03bf0ee5ef2eb69a21d044e09a3e"} Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.742914 4739 scope.go:117] "RemoveContainer" containerID="a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.751258 4739 generic.go:334] "Generic (PLEG): container finished" podID="3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" containerID="20ed1693da7b48e3233b021e00faeb52a068d3b6e995b6ca84280467ac46b548" exitCode=0 Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.751305 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94tzm" event={"ID":"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f","Type":"ContainerDied","Data":"20ed1693da7b48e3233b021e00faeb52a068d3b6e995b6ca84280467ac46b548"} Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.774526 4739 scope.go:117] "RemoveContainer" containerID="a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b" Feb 18 14:05:31 crc kubenswrapper[4739]: E0218 14:05:30.781653 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b\": container with ID starting with a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b not found: ID does not exist" containerID="a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.781734 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b"} err="failed to get container status \"a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b\": rpc error: code = NotFound desc = could not find container \"a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b\": container with ID starting with a70ceb0e6b53b01055b927d16038806c2e481ffd70d9fa86d9292bd4e2dec66b not found: ID does not exist" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.821711 4739 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-client-ca\") pod \"386aca13-7178-47f2-bf26-bb78e5c5ff49\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.821752 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/386aca13-7178-47f2-bf26-bb78e5c5ff49-serving-cert\") pod \"386aca13-7178-47f2-bf26-bb78e5c5ff49\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.821870 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-proxy-ca-bundles\") pod \"386aca13-7178-47f2-bf26-bb78e5c5ff49\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.821888 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-config\") pod \"386aca13-7178-47f2-bf26-bb78e5c5ff49\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.821919 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxp54\" (UniqueName: \"kubernetes.io/projected/386aca13-7178-47f2-bf26-bb78e5c5ff49-kube-api-access-fxp54\") pod \"386aca13-7178-47f2-bf26-bb78e5c5ff49\" (UID: \"386aca13-7178-47f2-bf26-bb78e5c5ff49\") " Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.823140 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "386aca13-7178-47f2-bf26-bb78e5c5ff49" (UID: "386aca13-7178-47f2-bf26-bb78e5c5ff49"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.823276 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-config" (OuterVolumeSpecName: "config") pod "386aca13-7178-47f2-bf26-bb78e5c5ff49" (UID: "386aca13-7178-47f2-bf26-bb78e5c5ff49"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.823483 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-client-ca" (OuterVolumeSpecName: "client-ca") pod "386aca13-7178-47f2-bf26-bb78e5c5ff49" (UID: "386aca13-7178-47f2-bf26-bb78e5c5ff49"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.829589 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/386aca13-7178-47f2-bf26-bb78e5c5ff49-kube-api-access-fxp54" (OuterVolumeSpecName: "kube-api-access-fxp54") pod "386aca13-7178-47f2-bf26-bb78e5c5ff49" (UID: "386aca13-7178-47f2-bf26-bb78e5c5ff49"). InnerVolumeSpecName "kube-api-access-fxp54". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.829835 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/386aca13-7178-47f2-bf26-bb78e5c5ff49-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "386aca13-7178-47f2-bf26-bb78e5c5ff49" (UID: "386aca13-7178-47f2-bf26-bb78e5c5ff49"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.923971 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/386aca13-7178-47f2-bf26-bb78e5c5ff49-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.924022 4739 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.924049 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.924070 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxp54\" (UniqueName: \"kubernetes.io/projected/386aca13-7178-47f2-bf26-bb78e5c5ff49-kube-api-access-fxp54\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:30.924088 4739 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/386aca13-7178-47f2-bf26-bb78e5c5ff49-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.139926 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b85c597f-7cj2x"] Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.147600 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b85c597f-7cj2x"] Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.699112 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b7465fb97-9dgmn"] Feb 18 14:05:31 crc kubenswrapper[4739]: E0218 14:05:31.699644 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="386aca13-7178-47f2-bf26-bb78e5c5ff49" containerName="controller-manager" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.699667 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="386aca13-7178-47f2-bf26-bb78e5c5ff49" containerName="controller-manager" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.699800 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="386aca13-7178-47f2-bf26-bb78e5c5ff49" containerName="controller-manager" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.700237 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.702143 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.702804 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.703299 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.703633 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.704021 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.704810 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.715340 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.769069 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b7465fb97-9dgmn"] Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.789006 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94tzm" event={"ID":"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f","Type":"ContainerStarted","Data":"07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689"} Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.792395 4739 generic.go:334] "Generic (PLEG): container finished" podID="c0ff243b-1f5d-4ab1-af8c-38a98b870d27" containerID="a91a155362822a8fd7463aee53a03bcce527fc96711ba76c83d138a4ccc3acb5" exitCode=0 Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.792454 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6sbz" event={"ID":"c0ff243b-1f5d-4ab1-af8c-38a98b870d27","Type":"ContainerDied","Data":"a91a155362822a8fd7463aee53a03bcce527fc96711ba76c83d138a4ccc3acb5"} Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.809517 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-94tzm" podStartSLOduration=2.381209286 podStartE2EDuration="4.809497s" podCreationTimestamp="2026-02-18 14:05:27 +0000 UTC" firstStartedPulling="2026-02-18 14:05:28.720416072 +0000 UTC m=+361.216136994" lastFinishedPulling="2026-02-18 14:05:31.148703776 +0000 UTC m=+363.644424708" observedRunningTime="2026-02-18 14:05:31.809110801 +0000 UTC m=+364.304831743" watchObservedRunningTime="2026-02-18 14:05:31.809497 +0000 UTC m=+364.305217942" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.835661 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0480fc06-58bc-47d0-9446-8eb7ecad6509-client-ca\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" Feb 18 14:05:31 
crc kubenswrapper[4739]: I0218 14:05:31.835777 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0480fc06-58bc-47d0-9446-8eb7ecad6509-config\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.835828 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0480fc06-58bc-47d0-9446-8eb7ecad6509-proxy-ca-bundles\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.835861 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8p78\" (UniqueName: \"kubernetes.io/projected/0480fc06-58bc-47d0-9446-8eb7ecad6509-kube-api-access-s8p78\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.836029 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0480fc06-58bc-47d0-9446-8eb7ecad6509-serving-cert\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.937046 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0480fc06-58bc-47d0-9446-8eb7ecad6509-serving-cert\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.937085 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0480fc06-58bc-47d0-9446-8eb7ecad6509-client-ca\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.937129 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0480fc06-58bc-47d0-9446-8eb7ecad6509-config\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.937178 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0480fc06-58bc-47d0-9446-8eb7ecad6509-proxy-ca-bundles\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.937235 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s8p78\" (UniqueName: \"kubernetes.io/projected/0480fc06-58bc-47d0-9446-8eb7ecad6509-kube-api-access-s8p78\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.938297 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0480fc06-58bc-47d0-9446-8eb7ecad6509-client-ca\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.938364 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0480fc06-58bc-47d0-9446-8eb7ecad6509-proxy-ca-bundles\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.938393 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0480fc06-58bc-47d0-9446-8eb7ecad6509-config\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.940971 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0480fc06-58bc-47d0-9446-8eb7ecad6509-serving-cert\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" Feb 18 14:05:31 crc kubenswrapper[4739]: I0218 14:05:31.964205 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8p78\" (UniqueName: \"kubernetes.io/projected/0480fc06-58bc-47d0-9446-8eb7ecad6509-kube-api-access-s8p78\") pod \"controller-manager-7b7465fb97-9dgmn\" (UID: \"0480fc06-58bc-47d0-9446-8eb7ecad6509\") " pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" Feb 18 14:05:32 crc kubenswrapper[4739]: I0218 14:05:32.082235 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" Feb 18 14:05:32 crc kubenswrapper[4739]: I0218 14:05:32.277820 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b7465fb97-9dgmn"] Feb 18 14:05:32 crc kubenswrapper[4739]: W0218 14:05:32.299982 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0480fc06_58bc_47d0_9446_8eb7ecad6509.slice/crio-a27ed670297620582ff5610a017e8903b70628a9ba4b3a767681a18df975e7aa WatchSource:0}: Error finding container a27ed670297620582ff5610a017e8903b70628a9ba4b3a767681a18df975e7aa: Status 404 returned error can't find the container with id a27ed670297620582ff5610a017e8903b70628a9ba4b3a767681a18df975e7aa Feb 18 14:05:32 crc kubenswrapper[4739]: I0218 14:05:32.422290 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="386aca13-7178-47f2-bf26-bb78e5c5ff49" path="/var/lib/kubelet/pods/386aca13-7178-47f2-bf26-bb78e5c5ff49/volumes" Feb 18 14:05:32 crc kubenswrapper[4739]: I0218 14:05:32.799508 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6sbz" event={"ID":"c0ff243b-1f5d-4ab1-af8c-38a98b870d27","Type":"ContainerStarted","Data":"c363a555f5e8acbad8a6089a475de441ded4bbf447b365999623b5505b377d45"} Feb 18 14:05:32 crc kubenswrapper[4739]: I0218 14:05:32.803717 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" event={"ID":"0480fc06-58bc-47d0-9446-8eb7ecad6509","Type":"ContainerStarted","Data":"54d7a8890659b3c46b4640bcb52cc98af7b156c2ab3e4bf6fa198003af572ff7"} Feb 18 14:05:32 crc kubenswrapper[4739]: I0218 14:05:32.803770 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" event={"ID":"0480fc06-58bc-47d0-9446-8eb7ecad6509","Type":"ContainerStarted","Data":"a27ed670297620582ff5610a017e8903b70628a9ba4b3a767681a18df975e7aa"} Feb 18 14:05:32 crc kubenswrapper[4739]: I0218 14:05:32.842163 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" podStartSLOduration=2.84214617 podStartE2EDuration="2.84214617s" podCreationTimestamp="2026-02-18 14:05:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:05:32.842091688 +0000 UTC m=+365.337812610" watchObservedRunningTime="2026-02-18 14:05:32.84214617 +0000 UTC m=+365.337867092" Feb 18 14:05:32 crc kubenswrapper[4739]: I0218 14:05:32.843474 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v6sbz" podStartSLOduration=3.267079208 podStartE2EDuration="5.843469343s" podCreationTimestamp="2026-02-18 14:05:27 +0000 UTC" firstStartedPulling="2026-02-18 14:05:29.73065215 +0000 UTC m=+362.226373072" lastFinishedPulling="2026-02-18 14:05:32.307042285 +0000 UTC m=+364.802763207" observedRunningTime="2026-02-18 14:05:32.820004326 +0000 UTC m=+365.315725248" watchObservedRunningTime="2026-02-18 14:05:32.843469343 +0000 UTC m=+365.339190265" Feb 18 14:05:33 crc kubenswrapper[4739]: I0218 14:05:33.808819 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" Feb 18 14:05:33 crc kubenswrapper[4739]: I0218 
14:05:33.815417 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" Feb 18 14:05:35 crc kubenswrapper[4739]: I0218 14:05:35.397379 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:35 crc kubenswrapper[4739]: I0218 14:05:35.397776 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:35 crc kubenswrapper[4739]: I0218 14:05:35.604099 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:35 crc kubenswrapper[4739]: I0218 14:05:35.604621 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:35 crc kubenswrapper[4739]: I0218 14:05:35.659870 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:35 crc kubenswrapper[4739]: I0218 14:05:35.873729 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p4z7n" Feb 18 14:05:36 crc kubenswrapper[4739]: I0218 14:05:36.442074 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n5478" podUID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" containerName="registry-server" probeResult="failure" output=< Feb 18 14:05:36 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:05:36 crc kubenswrapper[4739]: > Feb 18 14:05:38 crc kubenswrapper[4739]: I0218 14:05:38.100625 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:38 crc kubenswrapper[4739]: I0218 14:05:38.100697 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:38 crc kubenswrapper[4739]: I0218 14:05:38.172744 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:38 crc kubenswrapper[4739]: I0218 14:05:38.318137 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:38 crc kubenswrapper[4739]: I0218 14:05:38.318218 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:38 crc kubenswrapper[4739]: I0218 14:05:38.369086 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:38 crc kubenswrapper[4739]: I0218 14:05:38.898508 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:05:38 crc kubenswrapper[4739]: I0218 14:05:38.898972 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v6sbz" Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.295759 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nt8mp"] Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.296986 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.320904 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nt8mp"] Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.418248 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-bound-sa-token\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.418296 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs7fj\" (UniqueName: \"kubernetes.io/projected/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-kube-api-access-fs7fj\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.418488 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-trusted-ca\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.418691 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-registry-tls\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.418777 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.418846 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-registry-certificates\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.418892 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.418930 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.441069 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.520278 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-registry-tls\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.520357 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-registry-certificates\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.520378 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.520393 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.520424 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-bound-sa-token\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.520455 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs7fj\" (UniqueName: \"kubernetes.io/projected/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-kube-api-access-fs7fj\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.520525 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-trusted-ca\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.521304 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.521854 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-registry-certificates\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.521852 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-trusted-ca\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.525966 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-registry-tls\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.525979 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.536647 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-bound-sa-token\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.548085 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs7fj\" (UniqueName: \"kubernetes.io/projected/098619ca-afc3-4ac2-9ef5-1bc0ecac6a02-kube-api-access-fs7fj\") pod \"image-registry-66df7c8f76-nt8mp\" (UID: \"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02\") " pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:43 crc kubenswrapper[4739]: I0218 14:05:43.612248 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:44 crc kubenswrapper[4739]: I0218 14:05:44.017495 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nt8mp"] Feb 18 14:05:44 crc kubenswrapper[4739]: I0218 14:05:44.887140 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" event={"ID":"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02","Type":"ContainerStarted","Data":"c3155104d416a7a43bd0d77b73d5d690686cc7a402700106dc139bd5e68d790f"} Feb 18 14:05:44 crc kubenswrapper[4739]: I0218 14:05:44.887521 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:05:44 crc kubenswrapper[4739]: I0218 14:05:44.887534 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" event={"ID":"098619ca-afc3-4ac2-9ef5-1bc0ecac6a02","Type":"ContainerStarted","Data":"1459845147c9ed89a02f07910c0f67b6c852b30c9394b0ad99b1a23571832593"} Feb 18 14:05:44 crc kubenswrapper[4739]: I0218 14:05:44.907619 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" podStartSLOduration=1.907590756 podStartE2EDuration="1.907590756s" podCreationTimestamp="2026-02-18 14:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:05:44.906334604 +0000 UTC m=+377.402055536" watchObservedRunningTime="2026-02-18 14:05:44.907590756 +0000 UTC m=+377.403311708" Feb 18 14:05:45 crc kubenswrapper[4739]: I0218 14:05:45.467240 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:45 crc kubenswrapper[4739]: I0218 14:05:45.506318 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.165480 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2"] Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.166226 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" podUID="adb7e32d-b0a0-48cd-9bd0-03a390dcead5" containerName="route-controller-manager" containerID="cri-o://0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446" gracePeriod=30 Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.653786 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.736281 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-client-ca\") pod \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.736509 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-serving-cert\") pod \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.736550 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-config\") pod \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.736586 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24w9r\" (UniqueName: \"kubernetes.io/projected/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-kube-api-access-24w9r\") pod \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\" (UID: \"adb7e32d-b0a0-48cd-9bd0-03a390dcead5\") " Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.737353 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-client-ca" (OuterVolumeSpecName: "client-ca") pod "adb7e32d-b0a0-48cd-9bd0-03a390dcead5" (UID: "adb7e32d-b0a0-48cd-9bd0-03a390dcead5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.737398 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-config" (OuterVolumeSpecName: "config") pod "adb7e32d-b0a0-48cd-9bd0-03a390dcead5" (UID: "adb7e32d-b0a0-48cd-9bd0-03a390dcead5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.742364 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "adb7e32d-b0a0-48cd-9bd0-03a390dcead5" (UID: "adb7e32d-b0a0-48cd-9bd0-03a390dcead5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.742885 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-kube-api-access-24w9r" (OuterVolumeSpecName: "kube-api-access-24w9r") pod "adb7e32d-b0a0-48cd-9bd0-03a390dcead5" (UID: "adb7e32d-b0a0-48cd-9bd0-03a390dcead5"). InnerVolumeSpecName "kube-api-access-24w9r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.838215 4739 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.838264 4739 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.838274 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.838284 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24w9r\" (UniqueName: \"kubernetes.io/projected/adb7e32d-b0a0-48cd-9bd0-03a390dcead5-kube-api-access-24w9r\") on node \"crc\" DevicePath \"\"" Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.928891 4739 generic.go:334] "Generic (PLEG): container finished" podID="adb7e32d-b0a0-48cd-9bd0-03a390dcead5" containerID="0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446" exitCode=0 Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.928958 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" event={"ID":"adb7e32d-b0a0-48cd-9bd0-03a390dcead5","Type":"ContainerDied","Data":"0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446"} Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.929003 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" event={"ID":"adb7e32d-b0a0-48cd-9bd0-03a390dcead5","Type":"ContainerDied","Data":"b1b185d98c36c27c5d4462426e4a18d83db79ec0473ca9aef0bf6917797ee642"} Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.929035 4739 scope.go:117] "RemoveContainer" containerID="0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446" Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.929207 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2" Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.965382 4739 scope.go:117] "RemoveContainer" containerID="0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446" Feb 18 14:05:50 crc kubenswrapper[4739]: E0218 14:05:50.965978 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446\": container with ID starting with 0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446 not found: ID does not exist" containerID="0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446" Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.970213 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446"} err="failed to get container status \"0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446\": rpc error: code = NotFound desc = could not find container \"0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446\": container with ID starting with 0b5d6a9e53135725376f795c6e765dedc75e3c80bd1d9eb0d0c0612648010446 not found: ID does not exist" Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.986339 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2"] Feb 18 14:05:50 crc kubenswrapper[4739]: I0218 14:05:50.992190 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f87d8d559-8cvd2"] Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.713412 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5"] Feb 18 14:05:51 crc kubenswrapper[4739]: E0218 14:05:51.714302 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb7e32d-b0a0-48cd-9bd0-03a390dcead5" containerName="route-controller-manager" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.714362 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb7e32d-b0a0-48cd-9bd0-03a390dcead5" containerName="route-controller-manager" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.715746 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="adb7e32d-b0a0-48cd-9bd0-03a390dcead5" containerName="route-controller-manager" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.717293 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.720820 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.720988 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.722409 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.722658 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.723098 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.724019 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.732327 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5"] Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.755313 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8166ccce-dd66-40c5-aed1-8f560c573a6e-client-ca\") pod \"route-controller-manager-77ddcd9567-p8jx5\" (UID: \"8166ccce-dd66-40c5-aed1-8f560c573a6e\") " pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.755386 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdxdc\" (UniqueName: \"kubernetes.io/projected/8166ccce-dd66-40c5-aed1-8f560c573a6e-kube-api-access-hdxdc\") pod \"route-controller-manager-77ddcd9567-p8jx5\" (UID: \"8166ccce-dd66-40c5-aed1-8f560c573a6e\") " pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.755645 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8166ccce-dd66-40c5-aed1-8f560c573a6e-config\") pod \"route-controller-manager-77ddcd9567-p8jx5\" (UID: \"8166ccce-dd66-40c5-aed1-8f560c573a6e\") " pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.755880 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8166ccce-dd66-40c5-aed1-8f560c573a6e-serving-cert\") pod \"route-controller-manager-77ddcd9567-p8jx5\" (UID: \"8166ccce-dd66-40c5-aed1-8f560c573a6e\") " pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.857692 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8166ccce-dd66-40c5-aed1-8f560c573a6e-serving-cert\") pod 
\"route-controller-manager-77ddcd9567-p8jx5\" (UID: \"8166ccce-dd66-40c5-aed1-8f560c573a6e\") " pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.857859 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8166ccce-dd66-40c5-aed1-8f560c573a6e-client-ca\") pod \"route-controller-manager-77ddcd9567-p8jx5\" (UID: \"8166ccce-dd66-40c5-aed1-8f560c573a6e\") " pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.857914 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdxdc\" (UniqueName: \"kubernetes.io/projected/8166ccce-dd66-40c5-aed1-8f560c573a6e-kube-api-access-hdxdc\") pod \"route-controller-manager-77ddcd9567-p8jx5\" (UID: \"8166ccce-dd66-40c5-aed1-8f560c573a6e\") " pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.858029 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8166ccce-dd66-40c5-aed1-8f560c573a6e-config\") pod \"route-controller-manager-77ddcd9567-p8jx5\" (UID: \"8166ccce-dd66-40c5-aed1-8f560c573a6e\") " pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.860077 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8166ccce-dd66-40c5-aed1-8f560c573a6e-client-ca\") pod \"route-controller-manager-77ddcd9567-p8jx5\" (UID: \"8166ccce-dd66-40c5-aed1-8f560c573a6e\") " pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.863163 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8166ccce-dd66-40c5-aed1-8f560c573a6e-serving-cert\") pod \"route-controller-manager-77ddcd9567-p8jx5\" (UID: \"8166ccce-dd66-40c5-aed1-8f560c573a6e\") " pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.873885 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8166ccce-dd66-40c5-aed1-8f560c573a6e-config\") pod \"route-controller-manager-77ddcd9567-p8jx5\" (UID: \"8166ccce-dd66-40c5-aed1-8f560c573a6e\") " pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:51 crc kubenswrapper[4739]: I0218 14:05:51.888321 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdxdc\" (UniqueName: \"kubernetes.io/projected/8166ccce-dd66-40c5-aed1-8f560c573a6e-kube-api-access-hdxdc\") pod \"route-controller-manager-77ddcd9567-p8jx5\" (UID: \"8166ccce-dd66-40c5-aed1-8f560c573a6e\") " pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:52 crc kubenswrapper[4739]: I0218 14:05:52.047256 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:52 crc kubenswrapper[4739]: I0218 14:05:52.418674 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adb7e32d-b0a0-48cd-9bd0-03a390dcead5" path="/var/lib/kubelet/pods/adb7e32d-b0a0-48cd-9bd0-03a390dcead5/volumes" Feb 18 14:05:52 crc kubenswrapper[4739]: I0218 14:05:52.530972 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5"] Feb 18 14:05:52 crc kubenswrapper[4739]: W0218 14:05:52.536263 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8166ccce_dd66_40c5_aed1_8f560c573a6e.slice/crio-fce6c45cf8aa01bbe494283538ef37d4a1c9b8c4fad8431327e9186f35ee3f9c WatchSource:0}: Error finding container fce6c45cf8aa01bbe494283538ef37d4a1c9b8c4fad8431327e9186f35ee3f9c: Status 404 returned error can't find the container with id fce6c45cf8aa01bbe494283538ef37d4a1c9b8c4fad8431327e9186f35ee3f9c Feb 18 14:05:52 crc kubenswrapper[4739]: I0218 14:05:52.946834 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" event={"ID":"8166ccce-dd66-40c5-aed1-8f560c573a6e","Type":"ContainerStarted","Data":"56a1307aaf68651b341dd9b1e7344cad7501683c6ef6d4563093ee7194ac943e"} Feb 18 14:05:52 crc kubenswrapper[4739]: I0218 14:05:52.947397 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:52 crc kubenswrapper[4739]: I0218 14:05:52.947421 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" event={"ID":"8166ccce-dd66-40c5-aed1-8f560c573a6e","Type":"ContainerStarted","Data":"fce6c45cf8aa01bbe494283538ef37d4a1c9b8c4fad8431327e9186f35ee3f9c"} Feb 18 14:05:52 crc kubenswrapper[4739]: I0218 14:05:52.979679 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" podStartSLOduration=2.979649659 podStartE2EDuration="2.979649659s" podCreationTimestamp="2026-02-18 14:05:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:05:52.97410014 +0000 UTC m=+385.469821102" watchObservedRunningTime="2026-02-18 14:05:52.979649659 +0000 UTC m=+385.475370621" Feb 18 14:05:53 crc kubenswrapper[4739]: I0218 14:05:53.304057 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 14:05:55 crc kubenswrapper[4739]: I0218 14:05:55.727386 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-kjphg\" (UID: \"26e9543b-d10d-461c-8751-99e53b680e1c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:05:55 crc kubenswrapper[4739]: I0218 14:05:55.739794 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/26e9543b-d10d-461c-8751-99e53b680e1c-tls-certificates\") pod 
\"prometheus-operator-admission-webhook-f54c54754-kjphg\" (UID: \"26e9543b-d10d-461c-8751-99e53b680e1c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:05:55 crc kubenswrapper[4739]: I0218 14:05:55.859290 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:05:56 crc kubenswrapper[4739]: I0218 14:05:56.132956 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg"] Feb 18 14:05:56 crc kubenswrapper[4739]: W0218 14:05:56.140755 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26e9543b_d10d_461c_8751_99e53b680e1c.slice/crio-7bcf97552da176b1e2d8eef34f86b9670ab582c0af79a04a2cda1ffd58dc145e WatchSource:0}: Error finding container 7bcf97552da176b1e2d8eef34f86b9670ab582c0af79a04a2cda1ffd58dc145e: Status 404 returned error can't find the container with id 7bcf97552da176b1e2d8eef34f86b9670ab582c0af79a04a2cda1ffd58dc145e Feb 18 14:05:56 crc kubenswrapper[4739]: I0218 14:05:56.972886 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" event={"ID":"26e9543b-d10d-461c-8751-99e53b680e1c","Type":"ContainerStarted","Data":"7bcf97552da176b1e2d8eef34f86b9670ab582c0af79a04a2cda1ffd58dc145e"} Feb 18 14:05:57 crc kubenswrapper[4739]: I0218 14:05:57.981587 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" event={"ID":"26e9543b-d10d-461c-8751-99e53b680e1c","Type":"ContainerStarted","Data":"426a0d24cd8b8e5f72676298bc58b2a8e065bf98107a8c456aff7e5de045c61c"} Feb 18 14:05:57 crc kubenswrapper[4739]: I0218 14:05:57.982000 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:05:57 crc kubenswrapper[4739]: I0218 14:05:57.987910 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 14:05:57 crc kubenswrapper[4739]: I0218 14:05:57.997979 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" podStartSLOduration=65.77264438 podStartE2EDuration="1m6.997957388s" podCreationTimestamp="2026-02-18 14:04:51 +0000 UTC" firstStartedPulling="2026-02-18 14:05:56.142660269 +0000 UTC m=+388.638381191" lastFinishedPulling="2026-02-18 14:05:57.367973257 +0000 UTC m=+389.863694199" observedRunningTime="2026-02-18 14:05:57.99761424 +0000 UTC m=+390.493335182" watchObservedRunningTime="2026-02-18 14:05:57.997957388 +0000 UTC m=+390.493678360" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.004098 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-gd5xj"] Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.006777 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.008815 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.009297 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.010268 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.011231 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-k7qcm" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.011283 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-gd5xj"] Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.081694 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/946f8cb5-95e0-4850-a7ee-9be202a85f4d-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-gd5xj\" (UID: \"946f8cb5-95e0-4850-a7ee-9be202a85f4d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.081805 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/946f8cb5-95e0-4850-a7ee-9be202a85f4d-metrics-client-ca\") pod \"prometheus-operator-db54df47d-gd5xj\" (UID: \"946f8cb5-95e0-4850-a7ee-9be202a85f4d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.081837 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/946f8cb5-95e0-4850-a7ee-9be202a85f4d-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-gd5xj\" (UID: \"946f8cb5-95e0-4850-a7ee-9be202a85f4d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.081910 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9sbr\" (UniqueName: \"kubernetes.io/projected/946f8cb5-95e0-4850-a7ee-9be202a85f4d-kube-api-access-w9sbr\") pod \"prometheus-operator-db54df47d-gd5xj\" (UID: \"946f8cb5-95e0-4850-a7ee-9be202a85f4d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.183610 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9sbr\" (UniqueName: \"kubernetes.io/projected/946f8cb5-95e0-4850-a7ee-9be202a85f4d-kube-api-access-w9sbr\") pod \"prometheus-operator-db54df47d-gd5xj\" (UID: \"946f8cb5-95e0-4850-a7ee-9be202a85f4d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.183691 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/946f8cb5-95e0-4850-a7ee-9be202a85f4d-prometheus-operator-tls\") pod 
\"prometheus-operator-db54df47d-gd5xj\" (UID: \"946f8cb5-95e0-4850-a7ee-9be202a85f4d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.183769 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/946f8cb5-95e0-4850-a7ee-9be202a85f4d-metrics-client-ca\") pod \"prometheus-operator-db54df47d-gd5xj\" (UID: \"946f8cb5-95e0-4850-a7ee-9be202a85f4d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.183799 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/946f8cb5-95e0-4850-a7ee-9be202a85f4d-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-gd5xj\" (UID: \"946f8cb5-95e0-4850-a7ee-9be202a85f4d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.185644 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/946f8cb5-95e0-4850-a7ee-9be202a85f4d-metrics-client-ca\") pod \"prometheus-operator-db54df47d-gd5xj\" (UID: \"946f8cb5-95e0-4850-a7ee-9be202a85f4d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.190052 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/946f8cb5-95e0-4850-a7ee-9be202a85f4d-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-gd5xj\" (UID: \"946f8cb5-95e0-4850-a7ee-9be202a85f4d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.190515 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/946f8cb5-95e0-4850-a7ee-9be202a85f4d-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-gd5xj\" (UID: \"946f8cb5-95e0-4850-a7ee-9be202a85f4d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.208903 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9sbr\" (UniqueName: \"kubernetes.io/projected/946f8cb5-95e0-4850-a7ee-9be202a85f4d-kube-api-access-w9sbr\") pod \"prometheus-operator-db54df47d-gd5xj\" (UID: \"946f8cb5-95e0-4850-a7ee-9be202a85f4d\") " pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.338964 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.372667 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.372754 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.372827 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.373748 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c14eacdda4998b85fc850cbe1ea7ad895d0fff56e3dad4f03ee87c5b35cfb8f6"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.373826 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://c14eacdda4998b85fc850cbe1ea7ad895d0fff56e3dad4f03ee87c5b35cfb8f6" gracePeriod=600 Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.756939 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-gd5xj"] Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.994937 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" event={"ID":"946f8cb5-95e0-4850-a7ee-9be202a85f4d","Type":"ContainerStarted","Data":"71cfd4bd5dd7b8ef13581fb394f135c9502646ef56e20b94b7cf463404dc6758"} Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.998195 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="c14eacdda4998b85fc850cbe1ea7ad895d0fff56e3dad4f03ee87c5b35cfb8f6" exitCode=0 Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.998319 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"c14eacdda4998b85fc850cbe1ea7ad895d0fff56e3dad4f03ee87c5b35cfb8f6"} Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.998422 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"e5125cf77dc88adc47d4e5b3a55e6110798f0702d937bab37daf1e38919e0775"} Feb 18 14:05:59 crc kubenswrapper[4739]: I0218 14:05:59.998499 4739 scope.go:117] "RemoveContainer" containerID="3dcab1d80fdf8797a51bc2ce757130e9cc56fd38fc87ddd1aa1b8e88465373e4" Feb 18 14:06:03 crc kubenswrapper[4739]: I0218 14:06:03.018841 
4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" event={"ID":"946f8cb5-95e0-4850-a7ee-9be202a85f4d","Type":"ContainerStarted","Data":"c38be43b5695bcf43fbf84bf0ed166fb90f88397606d0ac205e19aec2e5eab1d"} Feb 18 14:06:03 crc kubenswrapper[4739]: I0218 14:06:03.019525 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" event={"ID":"946f8cb5-95e0-4850-a7ee-9be202a85f4d","Type":"ContainerStarted","Data":"242c926cf29682743ea13819c13856a2f7970236a671ddd67031f3418716a76f"} Feb 18 14:06:03 crc kubenswrapper[4739]: I0218 14:06:03.038485 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-gd5xj" podStartSLOduration=2.486450472 podStartE2EDuration="5.038466365s" podCreationTimestamp="2026-02-18 14:05:58 +0000 UTC" firstStartedPulling="2026-02-18 14:05:59.766108725 +0000 UTC m=+392.261829647" lastFinishedPulling="2026-02-18 14:06:02.318124618 +0000 UTC m=+394.813845540" observedRunningTime="2026-02-18 14:06:03.033716306 +0000 UTC m=+395.529437268" watchObservedRunningTime="2026-02-18 14:06:03.038466365 +0000 UTC m=+395.534187297" Feb 18 14:06:03 crc kubenswrapper[4739]: I0218 14:06:03.620590 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-nt8mp" Feb 18 14:06:03 crc kubenswrapper[4739]: I0218 14:06:03.705228 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dqtnr"] Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.342326 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t"] Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.343784 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.346638 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-w4vvt" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.346629 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.346896 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.355692 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t"] Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.373574 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"] Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.374967 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.377187 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.377284 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.377403 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.379506 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-mzkwp" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.398885 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b5b6adab-49f6-447e-a865-222633a2f9fd-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-5xb2t\" (UID: \"b5b6adab-49f6-447e-a865-222633a2f9fd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.398936 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b5b6adab-49f6-447e-a865-222633a2f9fd-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-5xb2t\" (UID: \"b5b6adab-49f6-447e-a865-222633a2f9fd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.398977 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b5b6adab-49f6-447e-a865-222633a2f9fd-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-5xb2t\" (UID: \"b5b6adab-49f6-447e-a865-222633a2f9fd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.399209 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xghmq\" (UniqueName: \"kubernetes.io/projected/b5b6adab-49f6-447e-a865-222633a2f9fd-kube-api-access-xghmq\") pod \"openshift-state-metrics-566fddb674-5xb2t\" (UID: \"b5b6adab-49f6-447e-a865-222633a2f9fd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.409230 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-2r9b6"] Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.410769 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.413077 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.413310 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.414429 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-8hvgw" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.441037 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"] Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500413 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-node-exporter-tls\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500478 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6hc2\" (UniqueName: \"kubernetes.io/projected/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-kube-api-access-n6hc2\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500505 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500533 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b5b6adab-49f6-447e-a865-222633a2f9fd-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-5xb2t\" (UID: \"b5b6adab-49f6-447e-a865-222633a2f9fd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500566 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b5b6adab-49f6-447e-a865-222633a2f9fd-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-5xb2t\" (UID: \"b5b6adab-49f6-447e-a865-222633a2f9fd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500589 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/b380310c-1045-470c-a5c7-25b4357c11c7-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 
14:06:05.500637 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-metrics-client-ca\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500668 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b380310c-1045-470c-a5c7-25b4357c11c7-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500704 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b380310c-1045-470c-a5c7-25b4357c11c7-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500735 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9585\" (UniqueName: \"kubernetes.io/projected/b380310c-1045-470c-a5c7-25b4357c11c7-kube-api-access-b9585\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500768 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/b380310c-1045-470c-a5c7-25b4357c11c7-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500793 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b380310c-1045-470c-a5c7-25b4357c11c7-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500824 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xghmq\" (UniqueName: \"kubernetes.io/projected/b5b6adab-49f6-447e-a865-222633a2f9fd-kube-api-access-xghmq\") pod \"openshift-state-metrics-566fddb674-5xb2t\" (UID: \"b5b6adab-49f6-447e-a865-222633a2f9fd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500871 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-root\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500898 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-node-exporter-wtmp\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500925 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-sys\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500951 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-node-exporter-textfile\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.500980 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b5b6adab-49f6-447e-a865-222633a2f9fd-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-5xb2t\" (UID: \"b5b6adab-49f6-447e-a865-222633a2f9fd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.502663 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b5b6adab-49f6-447e-a865-222633a2f9fd-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-5xb2t\" (UID: \"b5b6adab-49f6-447e-a865-222633a2f9fd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.507986 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b5b6adab-49f6-447e-a865-222633a2f9fd-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-5xb2t\" (UID: \"b5b6adab-49f6-447e-a865-222633a2f9fd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.520066 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xghmq\" (UniqueName: \"kubernetes.io/projected/b5b6adab-49f6-447e-a865-222633a2f9fd-kube-api-access-xghmq\") pod \"openshift-state-metrics-566fddb674-5xb2t\" (UID: \"b5b6adab-49f6-447e-a865-222633a2f9fd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.520490 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b5b6adab-49f6-447e-a865-222633a2f9fd-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-5xb2t\" (UID: \"b5b6adab-49f6-447e-a865-222633a2f9fd\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.601927 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9585\" (UniqueName: 
\"kubernetes.io/projected/b380310c-1045-470c-a5c7-25b4357c11c7-kube-api-access-b9585\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.602730 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/b380310c-1045-470c-a5c7-25b4357c11c7-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.602755 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b380310c-1045-470c-a5c7-25b4357c11c7-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.602794 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-root\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.602812 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-sys\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.602831 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-node-exporter-wtmp\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.602850 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-node-exporter-textfile\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.602872 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-node-exporter-tls\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.602887 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6hc2\" (UniqueName: \"kubernetes.io/projected/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-kube-api-access-n6hc2\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.602926 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.602962 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/b380310c-1045-470c-a5c7-25b4357c11c7-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.603003 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-metrics-client-ca\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.603020 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b380310c-1045-470c-a5c7-25b4357c11c7-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.603045 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b380310c-1045-470c-a5c7-25b4357c11c7-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.603841 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-sys\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.603920 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-node-exporter-textfile\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.603937 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-node-exporter-wtmp\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.604002 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-root\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:05 crc 
kubenswrapper[4739]: E0218 14:06:05.604021 4739 secret.go:188] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found Feb 18 14:06:05 crc kubenswrapper[4739]: E0218 14:06:05.604129 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b380310c-1045-470c-a5c7-25b4357c11c7-kube-state-metrics-tls podName:b380310c-1045-470c-a5c7-25b4357c11c7 nodeName:}" failed. No retries permitted until 2026-02-18 14:06:06.10410169 +0000 UTC m=+398.599822612 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/b380310c-1045-470c-a5c7-25b4357c11c7-kube-state-metrics-tls") pod "kube-state-metrics-777cb5bd5d-gp8q7" (UID: "b380310c-1045-470c-a5c7-25b4357c11c7") : secret "kube-state-metrics-tls" not found Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.604320 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/b380310c-1045-470c-a5c7-25b4357c11c7-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.604360 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-metrics-client-ca\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.604853 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b380310c-1045-470c-a5c7-25b4357c11c7-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.605249 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/b380310c-1045-470c-a5c7-25b4357c11c7-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.607974 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.609889 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-node-exporter-tls\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.610424 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/b380310c-1045-470c-a5c7-25b4357c11c7-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.617958 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9585\" (UniqueName: \"kubernetes.io/projected/b380310c-1045-470c-a5c7-25b4357c11c7-kube-api-access-b9585\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.624341 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6hc2\" (UniqueName: \"kubernetes.io/projected/ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc-kube-api-access-n6hc2\") pod \"node-exporter-2r9b6\" (UID: \"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc\") " pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.660519 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" Feb 18 14:06:05 crc kubenswrapper[4739]: I0218 14:06:05.731791 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-2r9b6" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.036633 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-2r9b6" event={"ID":"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc","Type":"ContainerStarted","Data":"5e83b1ff5684d5eaf38ce92f7c04c64e4d742069fcfa624eea949d96a539896e"} Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.115922 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b380310c-1045-470c-a5c7-25b4357c11c7-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.122872 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b380310c-1045-470c-a5c7-25b4357c11c7-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-gp8q7\" (UID: \"b380310c-1045-470c-a5c7-25b4357c11c7\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.153302 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t"] Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.292694 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.497552 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.503338 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.509263 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-wncft" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.509486 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.509681 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.509816 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.509946 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.510062 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.510560 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.510736 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.519371 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.521426 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.627723 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.627788 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23577b5e-feaf-46c2-973a-8aea75a6dbe0-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.627821 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/23577b5e-feaf-46c2-973a-8aea75a6dbe0-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.627841 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/23577b5e-feaf-46c2-973a-8aea75a6dbe0-config-out\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " 
pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.627858 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-466fx\" (UniqueName: \"kubernetes.io/projected/23577b5e-feaf-46c2-973a-8aea75a6dbe0-kube-api-access-466fx\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.627876 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.627891 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-config-volume\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.627916 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.627933 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-web-config\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.627958 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/23577b5e-feaf-46c2-973a-8aea75a6dbe0-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.627978 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/23577b5e-feaf-46c2-973a-8aea75a6dbe0-tls-assets\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.627994 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.729180 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.729228 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-config-volume\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.729255 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.729272 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-web-config\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.729302 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/23577b5e-feaf-46c2-973a-8aea75a6dbe0-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.729321 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/23577b5e-feaf-46c2-973a-8aea75a6dbe0-tls-assets\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.729344 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.729373 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.729399 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23577b5e-feaf-46c2-973a-8aea75a6dbe0-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.729425 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/23577b5e-feaf-46c2-973a-8aea75a6dbe0-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.729460 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-466fx\" (UniqueName: \"kubernetes.io/projected/23577b5e-feaf-46c2-973a-8aea75a6dbe0-kube-api-access-466fx\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.729476 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/23577b5e-feaf-46c2-973a-8aea75a6dbe0-config-out\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.731596 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/23577b5e-feaf-46c2-973a-8aea75a6dbe0-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.732912 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/23577b5e-feaf-46c2-973a-8aea75a6dbe0-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.733433 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23577b5e-feaf-46c2-973a-8aea75a6dbe0-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.734701 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/23577b5e-feaf-46c2-973a-8aea75a6dbe0-config-out\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.735586 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/23577b5e-feaf-46c2-973a-8aea75a6dbe0-tls-assets\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.737357 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-config-volume\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.737699 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" 
(UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.738082 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-web-config\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.748406 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.751542 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-466fx\" (UniqueName: \"kubernetes.io/projected/23577b5e-feaf-46c2-973a-8aea75a6dbe0-kube-api-access-466fx\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.752953 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.753644 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/23577b5e-feaf-46c2-973a-8aea75a6dbe0-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"23577b5e-feaf-46c2-973a-8aea75a6dbe0\") " pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.833242 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7"] Feb 18 14:06:06 crc kubenswrapper[4739]: I0218 14:06:06.868933 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 18 14:06:07 crc kubenswrapper[4739]: W0218 14:06:07.046432 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb380310c_1045_470c_a5c7_25b4357c11c7.slice/crio-e79da37e61532bc163efe7ee0224fdb88e0004b638b8606c18fb797aa63e75a2 WatchSource:0}: Error finding container e79da37e61532bc163efe7ee0224fdb88e0004b638b8606c18fb797aa63e75a2: Status 404 returned error can't find the container with id e79da37e61532bc163efe7ee0224fdb88e0004b638b8606c18fb797aa63e75a2 Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.049603 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" event={"ID":"b5b6adab-49f6-447e-a865-222633a2f9fd","Type":"ContainerStarted","Data":"9dfd0fa5a32597e0b3004f34a635e107330f11d3131a44c6a358eddc9cb61ff0"} Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.049685 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" event={"ID":"b5b6adab-49f6-447e-a865-222633a2f9fd","Type":"ContainerStarted","Data":"0ff860a5f49e7f1e918f8b4e1acd640479249b5853f6a2e3469d22db9752f90c"} Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.049697 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" event={"ID":"b5b6adab-49f6-447e-a865-222633a2f9fd","Type":"ContainerStarted","Data":"107df11e96f7c64c8e78dd655d3add85c1c75a7371fa9048b22ad0ec3127551c"} Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.465702 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-6d644458fc-hpxhn"] Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.468175 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.471170 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.471315 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.471340 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-rvww4" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.471338 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-8pgifqrph5csl" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.471886 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.472074 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.473613 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.480492 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6d644458fc-hpxhn"] Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.494160 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 18 14:06:07 crc kubenswrapper[4739]: W0218 14:06:07.524516 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23577b5e_feaf_46c2_973a_8aea75a6dbe0.slice/crio-5e4b1b4589bf2462ad8534d661802e96fb267fa913d82a8d56f711f3ae044c83 WatchSource:0}: Error finding container 5e4b1b4589bf2462ad8534d661802e96fb267fa913d82a8d56f711f3ae044c83: Status 404 returned error can't find the container with id 5e4b1b4589bf2462ad8534d661802e96fb267fa913d82a8d56f711f3ae044c83 Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.547835 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.547893 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t97fx\" (UniqueName: \"kubernetes.io/projected/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-kube-api-access-t97fx\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.547919 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-kube-rbac-proxy\") pod 
\"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.548049 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-tls\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.548112 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-metrics-client-ca\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.548150 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.548172 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-grpc-tls\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.548214 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.649296 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.649349 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-grpc-tls\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.649402 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: 
\"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.649503 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.649538 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t97fx\" (UniqueName: \"kubernetes.io/projected/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-kube-api-access-t97fx\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.649563 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.649606 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-tls\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.649644 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-metrics-client-ca\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.650652 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-metrics-client-ca\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.655321 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.655707 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: 
\"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.655798 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-grpc-tls\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.656234 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.657825 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-tls\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.670114 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.675138 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t97fx\" (UniqueName: \"kubernetes.io/projected/cd8f90ea-5539-40b0-ba4b-8b4465eae2dd-kube-api-access-t97fx\") pod \"thanos-querier-6d644458fc-hpxhn\" (UID: \"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd\") " pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:07 crc kubenswrapper[4739]: I0218 14:06:07.790028 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:08 crc kubenswrapper[4739]: I0218 14:06:08.061849 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"23577b5e-feaf-46c2-973a-8aea75a6dbe0","Type":"ContainerStarted","Data":"5e4b1b4589bf2462ad8534d661802e96fb267fa913d82a8d56f711f3ae044c83"} Feb 18 14:06:08 crc kubenswrapper[4739]: I0218 14:06:08.064546 4739 generic.go:334] "Generic (PLEG): container finished" podID="ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc" containerID="fe60499bb2da428f352809855a064c61a9d0e14436f7f4ef2376c634e3d8b38b" exitCode=0 Feb 18 14:06:08 crc kubenswrapper[4739]: I0218 14:06:08.064579 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-2r9b6" event={"ID":"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc","Type":"ContainerDied","Data":"fe60499bb2da428f352809855a064c61a9d0e14436f7f4ef2376c634e3d8b38b"} Feb 18 14:06:08 crc kubenswrapper[4739]: I0218 14:06:08.065666 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" event={"ID":"b380310c-1045-470c-a5c7-25b4357c11c7","Type":"ContainerStarted","Data":"e79da37e61532bc163efe7ee0224fdb88e0004b638b8606c18fb797aa63e75a2"} Feb 18 14:06:08 crc kubenswrapper[4739]: I0218 14:06:08.434545 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6d644458fc-hpxhn"] Feb 18 14:06:08 crc kubenswrapper[4739]: W0218 14:06:08.716865 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd8f90ea_5539_40b0_ba4b_8b4465eae2dd.slice/crio-0a7bf97d64c05ea558dbd69fd9b2c0c1a2e3f79e5bfa4e3f168a776e524fde95 WatchSource:0}: Error finding container 0a7bf97d64c05ea558dbd69fd9b2c0c1a2e3f79e5bfa4e3f168a776e524fde95: Status 404 returned error can't find the container with id 0a7bf97d64c05ea558dbd69fd9b2c0c1a2e3f79e5bfa4e3f168a776e524fde95 Feb 18 14:06:09 crc kubenswrapper[4739]: I0218 14:06:09.081186 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-2r9b6" event={"ID":"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc","Type":"ContainerStarted","Data":"3c35f3f4b0768115712809cb0d172cc479ba567d2829b39b440076c4127765f7"} Feb 18 14:06:09 crc kubenswrapper[4739]: I0218 14:06:09.081275 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-2r9b6" event={"ID":"ef2c2363-ad01-4952-bc8c-88ebd9a7e4cc","Type":"ContainerStarted","Data":"c3d4df0fc85b60e06ddab7dfdb03e906332cd5f2cc72ab85cf4c81f27a2d4f9a"} Feb 18 14:06:09 crc kubenswrapper[4739]: I0218 14:06:09.083736 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" event={"ID":"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd","Type":"ContainerStarted","Data":"0a7bf97d64c05ea558dbd69fd9b2c0c1a2e3f79e5bfa4e3f168a776e524fde95"} Feb 18 14:06:09 crc kubenswrapper[4739]: I0218 14:06:09.093314 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" event={"ID":"b5b6adab-49f6-447e-a865-222633a2f9fd","Type":"ContainerStarted","Data":"4435d03814348fab5a9d4afd861266e4966bd64b11596e3129851d5321c330ed"} Feb 18 14:06:09 crc kubenswrapper[4739]: I0218 14:06:09.104741 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-2r9b6" podStartSLOduration=2.76643981 
podStartE2EDuration="4.104720808s" podCreationTimestamp="2026-02-18 14:06:05 +0000 UTC" firstStartedPulling="2026-02-18 14:06:05.799963206 +0000 UTC m=+398.295684128" lastFinishedPulling="2026-02-18 14:06:07.138244204 +0000 UTC m=+399.633965126" observedRunningTime="2026-02-18 14:06:09.103410455 +0000 UTC m=+401.599131377" watchObservedRunningTime="2026-02-18 14:06:09.104720808 +0000 UTC m=+401.600441730" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.127503 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" event={"ID":"b380310c-1045-470c-a5c7-25b4357c11c7","Type":"ContainerStarted","Data":"c2ab5ac8c968bd33ba909e3d0d133bc92c17571e653b510b3e189a83c2b3e89e"} Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.127863 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" event={"ID":"b380310c-1045-470c-a5c7-25b4357c11c7","Type":"ContainerStarted","Data":"340f6b39a4eb6047108dab3f107a8d56499bc879335305388f13804d156c5d00"} Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.127890 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" event={"ID":"b380310c-1045-470c-a5c7-25b4357c11c7","Type":"ContainerStarted","Data":"027a676ac40fbf4051d9a46d65791e5f47c216bb0f23ba25ae83e6122191cb43"} Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.143003 4739 generic.go:334] "Generic (PLEG): container finished" podID="23577b5e-feaf-46c2-973a-8aea75a6dbe0" containerID="a26df73dda58372eb291701cee940e2a7f28fa5d5cda0d1580136449df84b43e" exitCode=0 Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.143135 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"23577b5e-feaf-46c2-973a-8aea75a6dbe0","Type":"ContainerDied","Data":"a26df73dda58372eb291701cee940e2a7f28fa5d5cda0d1580136449df84b43e"} Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.198745 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-5xb2t" podStartSLOduration=3.66774244 podStartE2EDuration="5.198714305s" podCreationTimestamp="2026-02-18 14:06:05 +0000 UTC" firstStartedPulling="2026-02-18 14:06:06.530150239 +0000 UTC m=+399.025871161" lastFinishedPulling="2026-02-18 14:06:08.061122114 +0000 UTC m=+400.556843026" observedRunningTime="2026-02-18 14:06:09.123045367 +0000 UTC m=+401.618766309" watchObservedRunningTime="2026-02-18 14:06:10.198714305 +0000 UTC m=+402.694435227" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.199038 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-gp8q7" podStartSLOduration=3.139465146 podStartE2EDuration="5.199033513s" podCreationTimestamp="2026-02-18 14:06:05 +0000 UTC" firstStartedPulling="2026-02-18 14:06:07.049297385 +0000 UTC m=+399.545018307" lastFinishedPulling="2026-02-18 14:06:09.108865752 +0000 UTC m=+401.604586674" observedRunningTime="2026-02-18 14:06:10.151787649 +0000 UTC m=+402.647508571" watchObservedRunningTime="2026-02-18 14:06:10.199033513 +0000 UTC m=+402.694754435" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.205487 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-58d7d9b477-pcf5b"] Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.206302 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.219025 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-58d7d9b477-pcf5b"] Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.317746 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-serving-cert\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.317795 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-config\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.317826 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-oauth-config\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.317969 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-service-ca\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.318013 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-trusted-ca-bundle\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.318078 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6dt5\" (UniqueName: \"kubernetes.io/projected/86a3de80-d2f2-4637-bebb-5944c22a2c83-kube-api-access-b6dt5\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.318126 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-oauth-serving-cert\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.419388 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-service-ca\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc 
kubenswrapper[4739]: I0218 14:06:10.419743 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-trusted-ca-bundle\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.419783 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6dt5\" (UniqueName: \"kubernetes.io/projected/86a3de80-d2f2-4637-bebb-5944c22a2c83-kube-api-access-b6dt5\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.419818 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-oauth-serving-cert\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.419895 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-serving-cert\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.419919 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-config\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.419953 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-oauth-config\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.420243 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-service-ca\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.420829 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-oauth-serving-cert\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.420949 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-trusted-ca-bundle\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: 
I0218 14:06:10.421702 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-config\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.423875 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-oauth-config\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.425675 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-serving-cert\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.439337 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6dt5\" (UniqueName: \"kubernetes.io/projected/86a3de80-d2f2-4637-bebb-5944c22a2c83-kube-api-access-b6dt5\") pod \"console-58d7d9b477-pcf5b\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.530511 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.785997 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-f5c56b6cc-ft74f"] Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.787011 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.790675 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.790949 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.791028 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-k4d7v" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.791086 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.791143 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-2mqmnq5hghn7e" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.791381 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.807373 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-f5c56b6cc-ft74f"] Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.826357 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ac03ed3e-3bdc-48cd-bf95-119b31b15208-secret-metrics-server-tls\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.826481 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ac03ed3e-3bdc-48cd-bf95-119b31b15208-secret-metrics-client-certs\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.826545 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ac03ed3e-3bdc-48cd-bf95-119b31b15208-audit-log\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.826592 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac03ed3e-3bdc-48cd-bf95-119b31b15208-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.826629 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac03ed3e-3bdc-48cd-bf95-119b31b15208-client-ca-bundle\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 
14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.826688 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz996\" (UniqueName: \"kubernetes.io/projected/ac03ed3e-3bdc-48cd-bf95-119b31b15208-kube-api-access-tz996\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.826737 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ac03ed3e-3bdc-48cd-bf95-119b31b15208-metrics-server-audit-profiles\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.927884 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ac03ed3e-3bdc-48cd-bf95-119b31b15208-audit-log\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.928149 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac03ed3e-3bdc-48cd-bf95-119b31b15208-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.928202 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac03ed3e-3bdc-48cd-bf95-119b31b15208-client-ca-bundle\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.928304 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz996\" (UniqueName: \"kubernetes.io/projected/ac03ed3e-3bdc-48cd-bf95-119b31b15208-kube-api-access-tz996\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.928333 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ac03ed3e-3bdc-48cd-bf95-119b31b15208-metrics-server-audit-profiles\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.928432 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ac03ed3e-3bdc-48cd-bf95-119b31b15208-secret-metrics-server-tls\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.928501 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ac03ed3e-3bdc-48cd-bf95-119b31b15208-secret-metrics-client-certs\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.929912 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ac03ed3e-3bdc-48cd-bf95-119b31b15208-metrics-server-audit-profiles\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.930349 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac03ed3e-3bdc-48cd-bf95-119b31b15208-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.930472 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ac03ed3e-3bdc-48cd-bf95-119b31b15208-audit-log\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.934129 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ac03ed3e-3bdc-48cd-bf95-119b31b15208-secret-metrics-server-tls\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.939333 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ac03ed3e-3bdc-48cd-bf95-119b31b15208-secret-metrics-client-certs\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.944136 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz996\" (UniqueName: \"kubernetes.io/projected/ac03ed3e-3bdc-48cd-bf95-119b31b15208-kube-api-access-tz996\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:10 crc kubenswrapper[4739]: I0218 14:06:10.947073 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac03ed3e-3bdc-48cd-bf95-119b31b15208-client-ca-bundle\") pod \"metrics-server-f5c56b6cc-ft74f\" (UID: \"ac03ed3e-3bdc-48cd-bf95-119b31b15208\") " pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.003221 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-58d7d9b477-pcf5b"] Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.109594 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.173356 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5"] Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.174349 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.180995 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.183736 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.187702 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5"] Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.232656 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/34c89fd8-2d23-4587-a802-4c07ad76bcd7-monitoring-plugin-cert\") pod \"monitoring-plugin-58bc79f98c-nzqw5\" (UID: \"34c89fd8-2d23-4587-a802-4c07ad76bcd7\") " pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.335402 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/34c89fd8-2d23-4587-a802-4c07ad76bcd7-monitoring-plugin-cert\") pod \"monitoring-plugin-58bc79f98c-nzqw5\" (UID: \"34c89fd8-2d23-4587-a802-4c07ad76bcd7\") " pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.340904 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/34c89fd8-2d23-4587-a802-4c07ad76bcd7-monitoring-plugin-cert\") pod \"monitoring-plugin-58bc79f98c-nzqw5\" (UID: \"34c89fd8-2d23-4587-a802-4c07ad76bcd7\") " pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.500118 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.716253 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.718231 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.725966 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.731838 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.731995 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.732077 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-g7bj36vt2qou" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.732229 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.732260 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.732398 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.732509 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.732585 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-bbn9z" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.732701 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.732715 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.733312 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.734563 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.745361 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.745421 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/22142e4b-3aae-4317-a2e5-2ad225fb7473-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.745475 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/22142e4b-3aae-4317-a2e5-2ad225fb7473-config-out\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.745565 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.745602 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.745653 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.745680 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.750128 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/22142e4b-3aae-4317-a2e5-2ad225fb7473-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.750194 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-config\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.750322 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.750399 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4lj2\" (UniqueName: \"kubernetes.io/projected/22142e4b-3aae-4317-a2e5-2ad225fb7473-kube-api-access-r4lj2\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc 
kubenswrapper[4739]: I0218 14:06:11.750470 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.750518 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.750568 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.750608 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.750634 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-web-config\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.750669 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.750700 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.756909 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.852778 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.852838 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.852861 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.852907 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-web-config\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.852927 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.852946 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.852981 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.853006 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.853026 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/22142e4b-3aae-4317-a2e5-2ad225fb7473-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.853061 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/22142e4b-3aae-4317-a2e5-2ad225fb7473-config-out\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.853086 4739 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.853107 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.853136 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.853152 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.853177 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/22142e4b-3aae-4317-a2e5-2ad225fb7473-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.853191 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-config\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.853214 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.853228 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4lj2\" (UniqueName: \"kubernetes.io/projected/22142e4b-3aae-4317-a2e5-2ad225fb7473-kube-api-access-r4lj2\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.860329 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.860490 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.860798 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.861078 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/22142e4b-3aae-4317-a2e5-2ad225fb7473-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.861728 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.864161 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-web-config\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.865776 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/22142e4b-3aae-4317-a2e5-2ad225fb7473-config-out\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.867342 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.867838 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/22142e4b-3aae-4317-a2e5-2ad225fb7473-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.868581 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.868939 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.869644 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-config\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.870184 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.872951 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4lj2\" (UniqueName: \"kubernetes.io/projected/22142e4b-3aae-4317-a2e5-2ad225fb7473-kube-api-access-r4lj2\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.873015 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.873438 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.885710 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/22142e4b-3aae-4317-a2e5-2ad225fb7473-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.887169 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/22142e4b-3aae-4317-a2e5-2ad225fb7473-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"22142e4b-3aae-4317-a2e5-2ad225fb7473\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:11 crc kubenswrapper[4739]: I0218 14:06:11.973622 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-f5c56b6cc-ft74f"] Feb 18 14:06:12 crc kubenswrapper[4739]: I0218 14:06:12.050369 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:12 crc kubenswrapper[4739]: I0218 14:06:12.178374 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58d7d9b477-pcf5b" event={"ID":"86a3de80-d2f2-4637-bebb-5944c22a2c83","Type":"ContainerStarted","Data":"80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0"} Feb 18 14:06:12 crc kubenswrapper[4739]: I0218 14:06:12.179569 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58d7d9b477-pcf5b" event={"ID":"86a3de80-d2f2-4637-bebb-5944c22a2c83","Type":"ContainerStarted","Data":"be64644632065d655e0cde5e224a8ff692c5d059479e399bec230d76053c2d58"} Feb 18 14:06:12 crc kubenswrapper[4739]: I0218 14:06:12.184420 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" event={"ID":"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd","Type":"ContainerStarted","Data":"adb22130c24c318c77b58c719bdb88ab59a69125673e1949ad3756f934a71718"} Feb 18 14:06:12 crc kubenswrapper[4739]: I0218 14:06:12.184468 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" event={"ID":"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd","Type":"ContainerStarted","Data":"6a8a8672a915148dca3df8994d95e9107301dab90053a1361fa94db7214cf5e5"} Feb 18 14:06:12 crc kubenswrapper[4739]: I0218 14:06:12.187123 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" event={"ID":"ac03ed3e-3bdc-48cd-bf95-119b31b15208","Type":"ContainerStarted","Data":"aa73099c96eeea9d12f2627be1ade2aa384673568666597342772c94f672b008"} Feb 18 14:06:12 crc kubenswrapper[4739]: I0218 14:06:12.200000 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-58d7d9b477-pcf5b" podStartSLOduration=2.199981471 podStartE2EDuration="2.199981471s" podCreationTimestamp="2026-02-18 14:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:06:12.197477909 +0000 UTC m=+404.693198861" watchObservedRunningTime="2026-02-18 14:06:12.199981471 +0000 UTC m=+404.695702403" Feb 18 14:06:12 crc kubenswrapper[4739]: I0218 14:06:12.228091 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5"] Feb 18 14:06:12 crc kubenswrapper[4739]: I0218 14:06:12.489913 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 18 14:06:12 crc kubenswrapper[4739]: W0218 14:06:12.500372 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22142e4b_3aae_4317_a2e5_2ad225fb7473.slice/crio-66863606ffe35869ab0a46467632aa858d2745f77117b901e1f8e16d5a0a4592 WatchSource:0}: Error finding container 66863606ffe35869ab0a46467632aa858d2745f77117b901e1f8e16d5a0a4592: Status 404 returned error can't find the container with id 66863606ffe35869ab0a46467632aa858d2745f77117b901e1f8e16d5a0a4592 Feb 18 14:06:13 crc kubenswrapper[4739]: I0218 14:06:13.195314 4739 generic.go:334] "Generic (PLEG): container finished" podID="22142e4b-3aae-4317-a2e5-2ad225fb7473" containerID="4ad90f7d33e0eefe30e1cc97c8efb390ac3860abd15d101f1750012f570a18cd" exitCode=0 Feb 18 14:06:13 crc kubenswrapper[4739]: I0218 14:06:13.195488 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"22142e4b-3aae-4317-a2e5-2ad225fb7473","Type":"ContainerDied","Data":"4ad90f7d33e0eefe30e1cc97c8efb390ac3860abd15d101f1750012f570a18cd"} Feb 18 14:06:13 crc kubenswrapper[4739]: I0218 14:06:13.195697 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"22142e4b-3aae-4317-a2e5-2ad225fb7473","Type":"ContainerStarted","Data":"66863606ffe35869ab0a46467632aa858d2745f77117b901e1f8e16d5a0a4592"} Feb 18 14:06:13 crc kubenswrapper[4739]: I0218 14:06:13.209435 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" event={"ID":"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd","Type":"ContainerStarted","Data":"49e8291bc5dc74ad5e84afc82a29ebc4a561079c55eb42533d8af56a03b4b9fe"} Feb 18 14:06:13 crc kubenswrapper[4739]: I0218 14:06:13.210796 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" event={"ID":"34c89fd8-2d23-4587-a802-4c07ad76bcd7","Type":"ContainerStarted","Data":"61fb2e7a8265abc8a0269610f87de8bb97446fd356fa20de632ecd0d4ed3b102"} Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.221879 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" event={"ID":"ac03ed3e-3bdc-48cd-bf95-119b31b15208","Type":"ContainerStarted","Data":"3d8147b125cb5878360a74eb88bb0e2f86a338193df75f8534e81151d855bde8"} Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.225650 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"23577b5e-feaf-46c2-973a-8aea75a6dbe0","Type":"ContainerStarted","Data":"d7d819a75ff16ae5fd1f51c469e10e3f03490067108b49ce9933d4320a1f1563"} Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.225676 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"23577b5e-feaf-46c2-973a-8aea75a6dbe0","Type":"ContainerStarted","Data":"f293877fc2776f27dd857aabf840a661a55e7c2af8bf7dfa8f951d3c0b01263d"} Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.225686 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"23577b5e-feaf-46c2-973a-8aea75a6dbe0","Type":"ContainerStarted","Data":"ff1b70884dec11ac788c995609cd5beebf00c9550a577182ae4459b42878b89e"} Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.225698 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"23577b5e-feaf-46c2-973a-8aea75a6dbe0","Type":"ContainerStarted","Data":"dce215bcb597ebaa24774149cd0da8065088fcb0feb466d3c85c8d87c4c2142f"} Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.225706 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"23577b5e-feaf-46c2-973a-8aea75a6dbe0","Type":"ContainerStarted","Data":"073271a4ef8cb53d82714dc7e915376681272babf13d94cf4df8a438d58aba8d"} Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.225714 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"23577b5e-feaf-46c2-973a-8aea75a6dbe0","Type":"ContainerStarted","Data":"0c6e44b84e26e4b9c9521c432a29b331b099ef15fc2f7676b00406d68c3c71c0"} Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.232058 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" event={"ID":"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd","Type":"ContainerStarted","Data":"1fcc315e5cb158fb4d26e7f06f27b9e1172813ead3e36a1d800348ed252007d7"} Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.232104 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" event={"ID":"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd","Type":"ContainerStarted","Data":"c6b9b3933a5aa2fb26295fc546890c7b0b185d91c249c634977d990d526473c8"} Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.232117 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" event={"ID":"cd8f90ea-5539-40b0-ba4b-8b4465eae2dd","Type":"ContainerStarted","Data":"23c5a95dc2b11d76ac644f4a0c724d3d25eaa3b45bf2335f9508f95756a8cb89"} Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.232972 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.235030 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" event={"ID":"34c89fd8-2d23-4587-a802-4c07ad76bcd7","Type":"ContainerStarted","Data":"e30d333b2583ca5048cb59beefd346a122b6f9759bd6ef0a566af1a13b37d8d9"} Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.235558 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.243093 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" podStartSLOduration=2.8170251 podStartE2EDuration="5.243081618s" podCreationTimestamp="2026-02-18 14:06:10 +0000 UTC" firstStartedPulling="2026-02-18 14:06:11.981107118 +0000 UTC m=+404.476828040" lastFinishedPulling="2026-02-18 14:06:14.407163636 +0000 UTC m=+406.902884558" observedRunningTime="2026-02-18 14:06:15.241381935 +0000 UTC m=+407.737102857" watchObservedRunningTime="2026-02-18 14:06:15.243081618 +0000 UTC m=+407.738802540" Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.246940 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.261654 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" podStartSLOduration=2.151216432 podStartE2EDuration="4.261631813s" podCreationTimestamp="2026-02-18 14:06:11 +0000 UTC" firstStartedPulling="2026-02-18 14:06:12.254174598 +0000 UTC m=+404.749895520" lastFinishedPulling="2026-02-18 14:06:14.364589979 +0000 UTC m=+406.860310901" observedRunningTime="2026-02-18 14:06:15.256646488 +0000 UTC m=+407.752367440" watchObservedRunningTime="2026-02-18 14:06:15.261631813 +0000 UTC m=+407.757352745" Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.298730 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.461395193 podStartE2EDuration="9.298714832s" podCreationTimestamp="2026-02-18 14:06:06 +0000 UTC" firstStartedPulling="2026-02-18 14:06:07.527386603 +0000 UTC m=+400.023107535" lastFinishedPulling="2026-02-18 14:06:14.364706232 +0000 UTC m=+406.860427174" observedRunningTime="2026-02-18 
14:06:15.293403109 +0000 UTC m=+407.789124041" watchObservedRunningTime="2026-02-18 14:06:15.298714832 +0000 UTC m=+407.794435764" Feb 18 14:06:15 crc kubenswrapper[4739]: I0218 14:06:15.339863 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" podStartSLOduration=2.649585289 podStartE2EDuration="8.339105024s" podCreationTimestamp="2026-02-18 14:06:07 +0000 UTC" firstStartedPulling="2026-02-18 14:06:08.725793505 +0000 UTC m=+401.221514427" lastFinishedPulling="2026-02-18 14:06:14.41531324 +0000 UTC m=+406.911034162" observedRunningTime="2026-02-18 14:06:15.327471832 +0000 UTC m=+407.823192744" watchObservedRunningTime="2026-02-18 14:06:15.339105024 +0000 UTC m=+407.834825956" Feb 18 14:06:17 crc kubenswrapper[4739]: I0218 14:06:17.269111 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" Feb 18 14:06:18 crc kubenswrapper[4739]: I0218 14:06:18.259037 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"22142e4b-3aae-4317-a2e5-2ad225fb7473","Type":"ContainerStarted","Data":"6965db354ab966e13601a58de0203f89563ca80f5969237fb79d53cec016183d"} Feb 18 14:06:18 crc kubenswrapper[4739]: I0218 14:06:18.259362 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"22142e4b-3aae-4317-a2e5-2ad225fb7473","Type":"ContainerStarted","Data":"8ac388f60ad587f76bc829b8147d223d8d2754cf343adbdbbd18054eb2a8cfd9"} Feb 18 14:06:18 crc kubenswrapper[4739]: I0218 14:06:18.259375 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"22142e4b-3aae-4317-a2e5-2ad225fb7473","Type":"ContainerStarted","Data":"e64739d4eeff90f8dd89979ea950c5c58ba3adc6ba05687ccaaead8cc5dfd928"} Feb 18 14:06:18 crc kubenswrapper[4739]: I0218 14:06:18.259383 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"22142e4b-3aae-4317-a2e5-2ad225fb7473","Type":"ContainerStarted","Data":"f95a56436fb67b08067185ab8a6e5fc004c22bad4e1d1da23f657959c48fad41"} Feb 18 14:06:18 crc kubenswrapper[4739]: I0218 14:06:18.259403 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"22142e4b-3aae-4317-a2e5-2ad225fb7473","Type":"ContainerStarted","Data":"01164903cded0adf0fd45394d4abd75818beaa631631ae3bc0adb5fa40229910"} Feb 18 14:06:19 crc kubenswrapper[4739]: I0218 14:06:19.272782 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"22142e4b-3aae-4317-a2e5-2ad225fb7473","Type":"ContainerStarted","Data":"33a653ef95267e3f16fa03f490e52fffaf95421b7b4abba12f9f5311f7e0aacd"} Feb 18 14:06:19 crc kubenswrapper[4739]: I0218 14:06:19.335106 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.975297029 podStartE2EDuration="8.335080662s" podCreationTimestamp="2026-02-18 14:06:11 +0000 UTC" firstStartedPulling="2026-02-18 14:06:13.198505716 +0000 UTC m=+405.694226638" lastFinishedPulling="2026-02-18 14:06:17.558289349 +0000 UTC m=+410.054010271" observedRunningTime="2026-02-18 14:06:19.329069341 +0000 UTC m=+411.824790303" watchObservedRunningTime="2026-02-18 14:06:19.335080662 +0000 UTC m=+411.830801614" Feb 18 14:06:20 crc kubenswrapper[4739]: I0218 14:06:20.531083 4739 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:20 crc kubenswrapper[4739]: I0218 14:06:20.531611 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:20 crc kubenswrapper[4739]: I0218 14:06:20.536552 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:21 crc kubenswrapper[4739]: I0218 14:06:21.292961 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:06:21 crc kubenswrapper[4739]: I0218 14:06:21.345582 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-r2dqq"] Feb 18 14:06:22 crc kubenswrapper[4739]: I0218 14:06:22.051526 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:06:28 crc kubenswrapper[4739]: I0218 14:06:28.761213 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" podUID="42c00254-0b69-45d3-8dd6-7f2ee914d65d" containerName="registry" containerID="cri-o://c53d5a482db632b149d61954455c1b63897dc05aa1c7bf18271a0c5962e25f92" gracePeriod=30 Feb 18 14:06:29 crc kubenswrapper[4739]: I0218 14:06:29.340735 4739 generic.go:334] "Generic (PLEG): container finished" podID="42c00254-0b69-45d3-8dd6-7f2ee914d65d" containerID="c53d5a482db632b149d61954455c1b63897dc05aa1c7bf18271a0c5962e25f92" exitCode=0 Feb 18 14:06:29 crc kubenswrapper[4739]: I0218 14:06:29.340810 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" event={"ID":"42c00254-0b69-45d3-8dd6-7f2ee914d65d","Type":"ContainerDied","Data":"c53d5a482db632b149d61954455c1b63897dc05aa1c7bf18271a0c5962e25f92"} Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.073611 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.110310 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.111136 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.155751 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-bound-sa-token\") pod \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.156261 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42c00254-0b69-45d3-8dd6-7f2ee914d65d-registry-certificates\") pod \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.156306 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42c00254-0b69-45d3-8dd6-7f2ee914d65d-installation-pull-secrets\") pod \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.156541 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.156648 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-registry-tls\") pod \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.156701 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr8zc\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-kube-api-access-lr8zc\") pod \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.156757 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42c00254-0b69-45d3-8dd6-7f2ee914d65d-ca-trust-extracted\") pod \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.157345 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42c00254-0b69-45d3-8dd6-7f2ee914d65d-trusted-ca\") pod \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\" (UID: \"42c00254-0b69-45d3-8dd6-7f2ee914d65d\") " Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.157990 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/42c00254-0b69-45d3-8dd6-7f2ee914d65d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "42c00254-0b69-45d3-8dd6-7f2ee914d65d" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.159406 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42c00254-0b69-45d3-8dd6-7f2ee914d65d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "42c00254-0b69-45d3-8dd6-7f2ee914d65d" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.162085 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "42c00254-0b69-45d3-8dd6-7f2ee914d65d" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.163179 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c00254-0b69-45d3-8dd6-7f2ee914d65d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "42c00254-0b69-45d3-8dd6-7f2ee914d65d" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.163639 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-kube-api-access-lr8zc" (OuterVolumeSpecName: "kube-api-access-lr8zc") pod "42c00254-0b69-45d3-8dd6-7f2ee914d65d" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d"). InnerVolumeSpecName "kube-api-access-lr8zc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.172120 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "42c00254-0b69-45d3-8dd6-7f2ee914d65d" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.174943 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "42c00254-0b69-45d3-8dd6-7f2ee914d65d" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.189082 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42c00254-0b69-45d3-8dd6-7f2ee914d65d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "42c00254-0b69-45d3-8dd6-7f2ee914d65d" (UID: "42c00254-0b69-45d3-8dd6-7f2ee914d65d"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.259089 4739 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.259138 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr8zc\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-kube-api-access-lr8zc\") on node \"crc\" DevicePath \"\"" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.259151 4739 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42c00254-0b69-45d3-8dd6-7f2ee914d65d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.259159 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42c00254-0b69-45d3-8dd6-7f2ee914d65d-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.259168 4739 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42c00254-0b69-45d3-8dd6-7f2ee914d65d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.259175 4739 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42c00254-0b69-45d3-8dd6-7f2ee914d65d-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.259184 4739 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42c00254-0b69-45d3-8dd6-7f2ee914d65d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.360214 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.360285 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dqtnr" event={"ID":"42c00254-0b69-45d3-8dd6-7f2ee914d65d","Type":"ContainerDied","Data":"b96e22f2e4072131e39645eec1bdeb575f2e322af330e9ccff4e59c7655f9d27"} Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.360331 4739 scope.go:117] "RemoveContainer" containerID="c53d5a482db632b149d61954455c1b63897dc05aa1c7bf18271a0c5962e25f92" Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.400792 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dqtnr"] Feb 18 14:06:31 crc kubenswrapper[4739]: I0218 14:06:31.411892 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dqtnr"] Feb 18 14:06:32 crc kubenswrapper[4739]: I0218 14:06:32.424374 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42c00254-0b69-45d3-8dd6-7f2ee914d65d" path="/var/lib/kubelet/pods/42c00254-0b69-45d3-8dd6-7f2ee914d65d/volumes" Feb 18 14:06:46 crc kubenswrapper[4739]: I0218 14:06:46.418018 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-r2dqq" podUID="dcd69695-49d3-46a8-9981-b592c44e827e" containerName="console" containerID="cri-o://e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27" gracePeriod=15 Feb 18 14:06:46 crc kubenswrapper[4739]: I0218 14:06:46.924373 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-r2dqq_dcd69695-49d3-46a8-9981-b592c44e827e/console/0.log" Feb 18 14:06:46 crc kubenswrapper[4739]: I0218 14:06:46.924722 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.005224 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd69695-49d3-46a8-9981-b592c44e827e-console-serving-cert\") pod \"dcd69695-49d3-46a8-9981-b592c44e827e\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.005301 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-oauth-serving-cert\") pod \"dcd69695-49d3-46a8-9981-b592c44e827e\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.005393 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dcd69695-49d3-46a8-9981-b592c44e827e-console-oauth-config\") pod \"dcd69695-49d3-46a8-9981-b592c44e827e\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.005502 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-console-config\") pod \"dcd69695-49d3-46a8-9981-b592c44e827e\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.005548 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-trusted-ca-bundle\") pod \"dcd69695-49d3-46a8-9981-b592c44e827e\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.005584 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-service-ca\") pod \"dcd69695-49d3-46a8-9981-b592c44e827e\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.005675 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvpnt\" (UniqueName: \"kubernetes.io/projected/dcd69695-49d3-46a8-9981-b592c44e827e-kube-api-access-fvpnt\") pod \"dcd69695-49d3-46a8-9981-b592c44e827e\" (UID: \"dcd69695-49d3-46a8-9981-b592c44e827e\") " Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.006691 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "dcd69695-49d3-46a8-9981-b592c44e827e" (UID: "dcd69695-49d3-46a8-9981-b592c44e827e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.006706 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd69695-49d3-46a8-9981-b592c44e827e" (UID: "dcd69695-49d3-46a8-9981-b592c44e827e"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.006720 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-service-ca" (OuterVolumeSpecName: "service-ca") pod "dcd69695-49d3-46a8-9981-b592c44e827e" (UID: "dcd69695-49d3-46a8-9981-b592c44e827e"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.006827 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-console-config" (OuterVolumeSpecName: "console-config") pod "dcd69695-49d3-46a8-9981-b592c44e827e" (UID: "dcd69695-49d3-46a8-9981-b592c44e827e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.011374 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd69695-49d3-46a8-9981-b592c44e827e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "dcd69695-49d3-46a8-9981-b592c44e827e" (UID: "dcd69695-49d3-46a8-9981-b592c44e827e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.011567 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd69695-49d3-46a8-9981-b592c44e827e-kube-api-access-fvpnt" (OuterVolumeSpecName: "kube-api-access-fvpnt") pod "dcd69695-49d3-46a8-9981-b592c44e827e" (UID: "dcd69695-49d3-46a8-9981-b592c44e827e"). InnerVolumeSpecName "kube-api-access-fvpnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.012076 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd69695-49d3-46a8-9981-b592c44e827e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "dcd69695-49d3-46a8-9981-b592c44e827e" (UID: "dcd69695-49d3-46a8-9981-b592c44e827e"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.107604 4739 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd69695-49d3-46a8-9981-b592c44e827e-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.107640 4739 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.107655 4739 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dcd69695-49d3-46a8-9981-b592c44e827e-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.107666 4739 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.107676 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.107684 4739 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dcd69695-49d3-46a8-9981-b592c44e827e-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.107696 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvpnt\" (UniqueName: \"kubernetes.io/projected/dcd69695-49d3-46a8-9981-b592c44e827e-kube-api-access-fvpnt\") on node \"crc\" DevicePath \"\"" Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.496040 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-r2dqq_dcd69695-49d3-46a8-9981-b592c44e827e/console/0.log" Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.496141 4739 generic.go:334] "Generic (PLEG): container finished" podID="dcd69695-49d3-46a8-9981-b592c44e827e" containerID="e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27" exitCode=2 Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.496201 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-r2dqq" event={"ID":"dcd69695-49d3-46a8-9981-b592c44e827e","Type":"ContainerDied","Data":"e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27"} Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.496252 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-r2dqq" event={"ID":"dcd69695-49d3-46a8-9981-b592c44e827e","Type":"ContainerDied","Data":"521d0f76ee7d4a163d13b57cff922dcd0df4129aae7138664aa07df19279036a"} Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.496292 4739 scope.go:117] "RemoveContainer" containerID="e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27" Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.496645 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-r2dqq" Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.535124 4739 scope.go:117] "RemoveContainer" containerID="e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27" Feb 18 14:06:47 crc kubenswrapper[4739]: E0218 14:06:47.536170 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27\": container with ID starting with e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27 not found: ID does not exist" containerID="e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27" Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.536227 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27"} err="failed to get container status \"e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27\": rpc error: code = NotFound desc = could not find container \"e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27\": container with ID starting with e8f23e28db7f4412e39190f87ebbe448d54c5e0d2f4cd4bcbe62e4bfde847c27 not found: ID does not exist" Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.540759 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-r2dqq"] Feb 18 14:06:47 crc kubenswrapper[4739]: I0218 14:06:47.549330 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-r2dqq"] Feb 18 14:06:48 crc kubenswrapper[4739]: I0218 14:06:48.423532 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd69695-49d3-46a8-9981-b592c44e827e" path="/var/lib/kubelet/pods/dcd69695-49d3-46a8-9981-b592c44e827e/volumes" Feb 18 14:06:51 crc kubenswrapper[4739]: I0218 14:06:51.117393 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:06:51 crc kubenswrapper[4739]: I0218 14:06:51.126749 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 14:07:12 crc kubenswrapper[4739]: I0218 14:07:12.051249 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:07:12 crc kubenswrapper[4739]: I0218 14:07:12.089752 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:07:12 crc kubenswrapper[4739]: I0218 14:07:12.760022 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.792052 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-796648847c-cwj5j"] Feb 18 14:07:49 crc kubenswrapper[4739]: E0218 14:07:49.792948 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd69695-49d3-46a8-9981-b592c44e827e" containerName="console" Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.792969 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd69695-49d3-46a8-9981-b592c44e827e" containerName="console" Feb 18 14:07:49 crc kubenswrapper[4739]: E0218 14:07:49.793002 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c00254-0b69-45d3-8dd6-7f2ee914d65d" 
containerName="registry" Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.793011 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c00254-0b69-45d3-8dd6-7f2ee914d65d" containerName="registry" Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.793157 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcd69695-49d3-46a8-9981-b592c44e827e" containerName="console" Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.793185 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c00254-0b69-45d3-8dd6-7f2ee914d65d" containerName="registry" Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.793747 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.805215 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-796648847c-cwj5j"] Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.981507 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-service-ca\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.981549 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d4490109-c2b2-4264-b163-1e259f4b335c-console-oauth-config\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.981591 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-trusted-ca-bundle\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.981746 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-oauth-serving-cert\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.982038 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4490109-c2b2-4264-b163-1e259f4b335c-console-serving-cert\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.982090 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-console-config\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:07:49 crc kubenswrapper[4739]: I0218 14:07:49.982174 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v824p\" (UniqueName: \"kubernetes.io/projected/d4490109-c2b2-4264-b163-1e259f4b335c-kube-api-access-v824p\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.082889 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-service-ca\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.082930 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d4490109-c2b2-4264-b163-1e259f4b335c-console-oauth-config\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.082958 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-trusted-ca-bundle\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.082984 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-oauth-serving-cert\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.083023 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4490109-c2b2-4264-b163-1e259f4b335c-console-serving-cert\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.083039 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-console-config\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.083061 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v824p\" (UniqueName: \"kubernetes.io/projected/d4490109-c2b2-4264-b163-1e259f4b335c-kube-api-access-v824p\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.084804 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-oauth-serving-cert\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.086010 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-console-config\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.086136 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-service-ca\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.086544 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-trusted-ca-bundle\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.089709 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d4490109-c2b2-4264-b163-1e259f4b335c-console-oauth-config\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.090627 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4490109-c2b2-4264-b163-1e259f4b335c-console-serving-cert\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.117635 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v824p\" (UniqueName: \"kubernetes.io/projected/d4490109-c2b2-4264-b163-1e259f4b335c-kube-api-access-v824p\") pod \"console-796648847c-cwj5j\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.412668 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.606853 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-796648847c-cwj5j"] Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.993853 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-796648847c-cwj5j" event={"ID":"d4490109-c2b2-4264-b163-1e259f4b335c","Type":"ContainerStarted","Data":"ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000"} Feb 18 14:07:50 crc kubenswrapper[4739]: I0218 14:07:50.993953 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-796648847c-cwj5j" event={"ID":"d4490109-c2b2-4264-b163-1e259f4b335c","Type":"ContainerStarted","Data":"ced41aeb18b143d7cb7b37389d8e7093c6f932a8b69ee8fd71755fd592dcd4fa"} Feb 18 14:07:51 crc kubenswrapper[4739]: I0218 14:07:51.018666 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-796648847c-cwj5j" podStartSLOduration=2.018635085 podStartE2EDuration="2.018635085s" podCreationTimestamp="2026-02-18 14:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:07:51.012141463 +0000 UTC m=+503.507862465" watchObservedRunningTime="2026-02-18 14:07:51.018635085 +0000 UTC m=+503.514356077" Feb 18 14:07:59 crc kubenswrapper[4739]: I0218 14:07:59.373237 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:07:59 crc kubenswrapper[4739]: I0218 14:07:59.373952 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:08:00 crc kubenswrapper[4739]: I0218 14:08:00.420871 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:08:00 crc kubenswrapper[4739]: I0218 14:08:00.421197 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:08:00 crc kubenswrapper[4739]: I0218 14:08:00.421306 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:08:00 crc kubenswrapper[4739]: I0218 14:08:00.425365 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:08:00 crc kubenswrapper[4739]: I0218 14:08:00.516729 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-58d7d9b477-pcf5b"] Feb 18 14:08:25 crc kubenswrapper[4739]: I0218 14:08:25.574175 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-58d7d9b477-pcf5b" podUID="86a3de80-d2f2-4637-bebb-5944c22a2c83" containerName="console" containerID="cri-o://80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0" gracePeriod=15 Feb 18 14:08:25 crc kubenswrapper[4739]: I0218 14:08:25.957609 4739 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-console_console-58d7d9b477-pcf5b_86a3de80-d2f2-4637-bebb-5944c22a2c83/console/0.log" Feb 18 14:08:25 crc kubenswrapper[4739]: I0218 14:08:25.957973 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.047556 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-trusted-ca-bundle\") pod \"86a3de80-d2f2-4637-bebb-5944c22a2c83\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.047647 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-serving-cert\") pod \"86a3de80-d2f2-4637-bebb-5944c22a2c83\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.047800 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-config\") pod \"86a3de80-d2f2-4637-bebb-5944c22a2c83\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.047855 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6dt5\" (UniqueName: \"kubernetes.io/projected/86a3de80-d2f2-4637-bebb-5944c22a2c83-kube-api-access-b6dt5\") pod \"86a3de80-d2f2-4637-bebb-5944c22a2c83\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.047911 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-service-ca\") pod \"86a3de80-d2f2-4637-bebb-5944c22a2c83\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.047943 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-oauth-serving-cert\") pod \"86a3de80-d2f2-4637-bebb-5944c22a2c83\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.047978 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-oauth-config\") pod \"86a3de80-d2f2-4637-bebb-5944c22a2c83\" (UID: \"86a3de80-d2f2-4637-bebb-5944c22a2c83\") " Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.048197 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "86a3de80-d2f2-4637-bebb-5944c22a2c83" (UID: "86a3de80-d2f2-4637-bebb-5944c22a2c83"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.048363 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.049009 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-service-ca" (OuterVolumeSpecName: "service-ca") pod "86a3de80-d2f2-4637-bebb-5944c22a2c83" (UID: "86a3de80-d2f2-4637-bebb-5944c22a2c83"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.049035 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "86a3de80-d2f2-4637-bebb-5944c22a2c83" (UID: "86a3de80-d2f2-4637-bebb-5944c22a2c83"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.049579 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-config" (OuterVolumeSpecName: "console-config") pod "86a3de80-d2f2-4637-bebb-5944c22a2c83" (UID: "86a3de80-d2f2-4637-bebb-5944c22a2c83"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.053692 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86a3de80-d2f2-4637-bebb-5944c22a2c83-kube-api-access-b6dt5" (OuterVolumeSpecName: "kube-api-access-b6dt5") pod "86a3de80-d2f2-4637-bebb-5944c22a2c83" (UID: "86a3de80-d2f2-4637-bebb-5944c22a2c83"). InnerVolumeSpecName "kube-api-access-b6dt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.054596 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "86a3de80-d2f2-4637-bebb-5944c22a2c83" (UID: "86a3de80-d2f2-4637-bebb-5944c22a2c83"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.054647 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "86a3de80-d2f2-4637-bebb-5944c22a2c83" (UID: "86a3de80-d2f2-4637-bebb-5944c22a2c83"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.150206 4739 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.150258 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6dt5\" (UniqueName: \"kubernetes.io/projected/86a3de80-d2f2-4637-bebb-5944c22a2c83-kube-api-access-b6dt5\") on node \"crc\" DevicePath \"\"" Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.150282 4739 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.150303 4739 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86a3de80-d2f2-4637-bebb-5944c22a2c83-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.150322 4739 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.150340 4739 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86a3de80-d2f2-4637-bebb-5944c22a2c83-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.251782 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-58d7d9b477-pcf5b_86a3de80-d2f2-4637-bebb-5944c22a2c83/console/0.log" Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.251834 4739 generic.go:334] "Generic (PLEG): container finished" podID="86a3de80-d2f2-4637-bebb-5944c22a2c83" containerID="80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0" exitCode=2 Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.251861 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58d7d9b477-pcf5b" event={"ID":"86a3de80-d2f2-4637-bebb-5944c22a2c83","Type":"ContainerDied","Data":"80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0"} Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.251887 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58d7d9b477-pcf5b" event={"ID":"86a3de80-d2f2-4637-bebb-5944c22a2c83","Type":"ContainerDied","Data":"be64644632065d655e0cde5e224a8ff692c5d059479e399bec230d76053c2d58"} Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.251905 4739 scope.go:117] "RemoveContainer" containerID="80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0" Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.251945 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-58d7d9b477-pcf5b" Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.283875 4739 scope.go:117] "RemoveContainer" containerID="80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0" Feb 18 14:08:26 crc kubenswrapper[4739]: E0218 14:08:26.285236 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0\": container with ID starting with 80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0 not found: ID does not exist" containerID="80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0" Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.285333 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0"} err="failed to get container status \"80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0\": rpc error: code = NotFound desc = could not find container \"80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0\": container with ID starting with 80f582585589f3644b159c913d69030ed0bcfb11197ee5eccc412fc26652d6b0 not found: ID does not exist" Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.301853 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-58d7d9b477-pcf5b"] Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.305588 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-58d7d9b477-pcf5b"] Feb 18 14:08:26 crc kubenswrapper[4739]: I0218 14:08:26.419996 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86a3de80-d2f2-4637-bebb-5944c22a2c83" path="/var/lib/kubelet/pods/86a3de80-d2f2-4637-bebb-5944c22a2c83/volumes" Feb 18 14:08:28 crc kubenswrapper[4739]: I0218 14:08:28.615205 4739 scope.go:117] "RemoveContainer" containerID="22ab4c4400803a84698f429676267f73d2f72204f8bfd5e8b8c44045eb32a01a" Feb 18 14:08:29 crc kubenswrapper[4739]: I0218 14:08:29.373153 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:08:29 crc kubenswrapper[4739]: I0218 14:08:29.373595 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:08:59 crc kubenswrapper[4739]: I0218 14:08:59.372557 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:08:59 crc kubenswrapper[4739]: I0218 14:08:59.373251 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Feb 18 14:08:59 crc kubenswrapper[4739]: I0218 14:08:59.373312 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 14:08:59 crc kubenswrapper[4739]: I0218 14:08:59.376669 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5125cf77dc88adc47d4e5b3a55e6110798f0702d937bab37daf1e38919e0775"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:08:59 crc kubenswrapper[4739]: I0218 14:08:59.376968 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://e5125cf77dc88adc47d4e5b3a55e6110798f0702d937bab37daf1e38919e0775" gracePeriod=600 Feb 18 14:09:00 crc kubenswrapper[4739]: I0218 14:09:00.505055 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="e5125cf77dc88adc47d4e5b3a55e6110798f0702d937bab37daf1e38919e0775" exitCode=0 Feb 18 14:09:00 crc kubenswrapper[4739]: I0218 14:09:00.505160 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"e5125cf77dc88adc47d4e5b3a55e6110798f0702d937bab37daf1e38919e0775"} Feb 18 14:09:00 crc kubenswrapper[4739]: I0218 14:09:00.505605 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"7bcd6eb763d9647cbf8a9e5cc6f00d646bc23617c6a59561a2e57ce5ab39d939"} Feb 18 14:09:00 crc kubenswrapper[4739]: I0218 14:09:00.505647 4739 scope.go:117] "RemoveContainer" containerID="c14eacdda4998b85fc850cbe1ea7ad895d0fff56e3dad4f03ee87c5b35cfb8f6" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.364254 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8"] Feb 18 14:09:07 crc kubenswrapper[4739]: E0218 14:09:07.365141 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86a3de80-d2f2-4637-bebb-5944c22a2c83" containerName="console" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.365160 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="86a3de80-d2f2-4637-bebb-5944c22a2c83" containerName="console" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.365297 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="86a3de80-d2f2-4637-bebb-5944c22a2c83" containerName="console" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.366287 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.368989 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.383815 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8"] Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.410300 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8\" (UID: \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.410371 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn8wp\" (UniqueName: \"kubernetes.io/projected/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-kube-api-access-cn8wp\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8\" (UID: \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.410478 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8\" (UID: \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.511422 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8\" (UID: \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.511502 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn8wp\" (UniqueName: \"kubernetes.io/projected/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-kube-api-access-cn8wp\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8\" (UID: \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.511549 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8\" (UID: \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.511974 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8\" (UID: \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.512046 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8\" (UID: \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.531729 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn8wp\" (UniqueName: \"kubernetes.io/projected/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-kube-api-access-cn8wp\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8\" (UID: \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.682337 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:07 crc kubenswrapper[4739]: I0218 14:09:07.897927 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8"] Feb 18 14:09:08 crc kubenswrapper[4739]: I0218 14:09:08.557661 4739 generic.go:334] "Generic (PLEG): container finished" podID="8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" containerID="b45866b485a873e533217c5609dff01b7a1fbda5b6dd344d2f3f11bef95be4df" exitCode=0 Feb 18 14:09:08 crc kubenswrapper[4739]: I0218 14:09:08.557733 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" event={"ID":"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9","Type":"ContainerDied","Data":"b45866b485a873e533217c5609dff01b7a1fbda5b6dd344d2f3f11bef95be4df"} Feb 18 14:09:08 crc kubenswrapper[4739]: I0218 14:09:08.557798 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" event={"ID":"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9","Type":"ContainerStarted","Data":"41cdf91f468feaa1446bfaac2c0029bfe52337049631873b866501ecff6dfa06"} Feb 18 14:09:08 crc kubenswrapper[4739]: I0218 14:09:08.559870 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 14:09:10 crc kubenswrapper[4739]: I0218 14:09:10.571240 4739 generic.go:334] "Generic (PLEG): container finished" podID="8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" containerID="2271379a199a89e7ff76a4a76d9c723a989b4feb61f0a0f5f17a7ee8b6115e19" exitCode=0 Feb 18 14:09:10 crc kubenswrapper[4739]: I0218 14:09:10.571518 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" event={"ID":"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9","Type":"ContainerDied","Data":"2271379a199a89e7ff76a4a76d9c723a989b4feb61f0a0f5f17a7ee8b6115e19"} Feb 18 14:09:11 crc kubenswrapper[4739]: I0218 14:09:11.584218 4739 generic.go:334] "Generic (PLEG): container finished" 
podID="8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" containerID="bf2d0b5b32f74e0b202e14eff82aa0195b1a6152ef8b92136c9f5d68b3ee0774" exitCode=0 Feb 18 14:09:11 crc kubenswrapper[4739]: I0218 14:09:11.584275 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" event={"ID":"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9","Type":"ContainerDied","Data":"bf2d0b5b32f74e0b202e14eff82aa0195b1a6152ef8b92136c9f5d68b3ee0774"} Feb 18 14:09:12 crc kubenswrapper[4739]: I0218 14:09:12.885673 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:13 crc kubenswrapper[4739]: I0218 14:09:13.004599 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn8wp\" (UniqueName: \"kubernetes.io/projected/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-kube-api-access-cn8wp\") pod \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\" (UID: \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\") " Feb 18 14:09:13 crc kubenswrapper[4739]: I0218 14:09:13.004669 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-bundle\") pod \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\" (UID: \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\") " Feb 18 14:09:13 crc kubenswrapper[4739]: I0218 14:09:13.004721 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-util\") pod \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\" (UID: \"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9\") " Feb 18 14:09:13 crc kubenswrapper[4739]: I0218 14:09:13.010635 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-kube-api-access-cn8wp" (OuterVolumeSpecName: "kube-api-access-cn8wp") pod "8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" (UID: "8d944a4d-4b9c-43f2-be16-0f222b4cb0c9"). InnerVolumeSpecName "kube-api-access-cn8wp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:09:13 crc kubenswrapper[4739]: I0218 14:09:13.010930 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-bundle" (OuterVolumeSpecName: "bundle") pod "8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" (UID: "8d944a4d-4b9c-43f2-be16-0f222b4cb0c9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:09:13 crc kubenswrapper[4739]: I0218 14:09:13.018572 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-util" (OuterVolumeSpecName: "util") pod "8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" (UID: "8d944a4d-4b9c-43f2-be16-0f222b4cb0c9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:09:13 crc kubenswrapper[4739]: I0218 14:09:13.105808 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn8wp\" (UniqueName: \"kubernetes.io/projected/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-kube-api-access-cn8wp\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:13 crc kubenswrapper[4739]: I0218 14:09:13.106084 4739 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:13 crc kubenswrapper[4739]: I0218 14:09:13.106096 4739 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8d944a4d-4b9c-43f2-be16-0f222b4cb0c9-util\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:13 crc kubenswrapper[4739]: I0218 14:09:13.601882 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" event={"ID":"8d944a4d-4b9c-43f2-be16-0f222b4cb0c9","Type":"ContainerDied","Data":"41cdf91f468feaa1446bfaac2c0029bfe52337049631873b866501ecff6dfa06"} Feb 18 14:09:13 crc kubenswrapper[4739]: I0218 14:09:13.601952 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41cdf91f468feaa1446bfaac2c0029bfe52337049631873b866501ecff6dfa06" Feb 18 14:09:13 crc kubenswrapper[4739]: I0218 14:09:13.601994 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8" Feb 18 14:09:18 crc kubenswrapper[4739]: I0218 14:09:18.932591 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x4j94"] Feb 18 14:09:18 crc kubenswrapper[4739]: I0218 14:09:18.933387 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovn-controller" containerID="cri-o://12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8" gracePeriod=30 Feb 18 14:09:18 crc kubenswrapper[4739]: I0218 14:09:18.933681 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="sbdb" containerID="cri-o://76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34" gracePeriod=30 Feb 18 14:09:18 crc kubenswrapper[4739]: I0218 14:09:18.933690 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="northd" containerID="cri-o://f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334" gracePeriod=30 Feb 18 14:09:18 crc kubenswrapper[4739]: I0218 14:09:18.933742 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovn-acl-logging" containerID="cri-o://fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552" gracePeriod=30 Feb 18 14:09:18 crc kubenswrapper[4739]: I0218 14:09:18.933751 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="kube-rbac-proxy-node" 
containerID="cri-o://15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41" gracePeriod=30 Feb 18 14:09:18 crc kubenswrapper[4739]: I0218 14:09:18.933712 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="nbdb" containerID="cri-o://d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216" gracePeriod=30 Feb 18 14:09:18 crc kubenswrapper[4739]: I0218 14:09:18.933814 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e" gracePeriod=30 Feb 18 14:09:18 crc kubenswrapper[4739]: I0218 14:09:18.966763 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" containerID="cri-o://54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8" gracePeriod=30 Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.645291 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovnkube-controller/3.log" Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.648376 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovn-acl-logging/0.log" Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.648994 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovn-controller/0.log" Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.649545 4739 generic.go:334] "Generic (PLEG): container finished" podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8" exitCode=0 Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.649573 4739 generic.go:334] "Generic (PLEG): container finished" podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34" exitCode=0 Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.649582 4739 generic.go:334] "Generic (PLEG): container finished" podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216" exitCode=0 Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.649590 4739 generic.go:334] "Generic (PLEG): container finished" podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334" exitCode=0 Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.649599 4739 generic.go:334] "Generic (PLEG): container finished" podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552" exitCode=143 Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.649607 4739 generic.go:334] "Generic (PLEG): container finished" podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8" exitCode=143 Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.649629 4739 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8"} Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.649679 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34"} Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.649690 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216"} Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.649703 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334"} Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.649712 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552"} Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.649721 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8"} Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.649739 4739 scope.go:117] "RemoveContainer" containerID="cd4329e957291efef202b02b980bd6204928a5b0d86ed948a134aef54272c5ed" Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.652010 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9slg_ec8fd6de-f77b-48a7-848f-a1b94e866365/kube-multus/2.log" Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.652515 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9slg_ec8fd6de-f77b-48a7-848f-a1b94e866365/kube-multus/1.log" Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.652558 4739 generic.go:334] "Generic (PLEG): container finished" podID="ec8fd6de-f77b-48a7-848f-a1b94e866365" containerID="d2933eda9affe42ab15a0347bde54987f36d532b9d62d4495588205b777d7ff1" exitCode=2 Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.652589 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9slg" event={"ID":"ec8fd6de-f77b-48a7-848f-a1b94e866365","Type":"ContainerDied","Data":"d2933eda9affe42ab15a0347bde54987f36d532b9d62d4495588205b777d7ff1"} Feb 18 14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.653048 4739 scope.go:117] "RemoveContainer" containerID="d2933eda9affe42ab15a0347bde54987f36d532b9d62d4495588205b777d7ff1" Feb 18 14:09:19 crc kubenswrapper[4739]: E0218 14:09:19.653267 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-h9slg_openshift-multus(ec8fd6de-f77b-48a7-848f-a1b94e866365)\"" pod="openshift-multus/multus-h9slg" podUID="ec8fd6de-f77b-48a7-848f-a1b94e866365" Feb 18 
14:09:19 crc kubenswrapper[4739]: I0218 14:09:19.677766 4739 scope.go:117] "RemoveContainer" containerID="c7e57d4b3d2fa1999cedc5cef8c29dd528fa5f44c130854cb8f7dc0751a2ce67" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.219138 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovn-acl-logging/0.log" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.219560 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovn-controller/0.log" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.219910 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.274813 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-njz85"] Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275179 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="northd" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275204 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="northd" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275223 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275238 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275253 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275266 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275283 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" containerName="util" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275295 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" containerName="util" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275310 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" containerName="pull" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275322 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" containerName="pull" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275340 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovn-acl-logging" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275351 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovn-acl-logging" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275376 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="kube-rbac-proxy-node" Feb 18 14:09:20 
crc kubenswrapper[4739]: I0218 14:09:20.275389 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="kube-rbac-proxy-node" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275411 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275423 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275436 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="sbdb" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275452 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="sbdb" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275493 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" containerName="extract" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275506 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" containerName="extract" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275523 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="kubecfg-setup" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275535 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="kubecfg-setup" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275551 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="nbdb" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275563 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="nbdb" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275583 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275595 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275608 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovn-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275620 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovn-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.275638 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275651 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275831 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc 
kubenswrapper[4739]: I0218 14:09:20.275862 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovn-acl-logging" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275877 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="northd" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275897 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275911 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovn-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275923 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275937 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275953 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="kube-rbac-proxy-node" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275971 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="sbdb" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.275993 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="nbdb" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.276011 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.276027 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d944a4d-4b9c-43f2-be16-0f222b4cb0c9" containerName="extract" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.276239 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.276261 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.276428 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerName="ovnkube-controller" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.279054 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403147 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-log-socket\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403206 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-openvswitch\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403222 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-cni-bin\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403236 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-etc-openvswitch\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403251 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-systemd-units\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403265 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-var-lib-cni-networks-ovn-kubernetes\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403288 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtd5n\" (UniqueName: \"kubernetes.io/projected/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-kube-api-access-dtd5n\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403309 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-node-log\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403329 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-run-netns\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403347 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-cni-netd\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403361 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-run-ovn-kubernetes\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403378 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-ovn\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403398 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-systemd\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403414 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-var-lib-openvswitch\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403439 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovn-node-metrics-cert\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403471 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-kubelet\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403488 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovnkube-script-lib\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403503 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-slash\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403538 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-env-overrides\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403553 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovnkube-config\") pod \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\" (UID: \"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224\") " Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403619 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-log-socket\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403639 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-env-overrides\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403656 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-ovn-node-metrics-cert\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403676 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403695 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-run-openvswitch\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403712 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-ovnkube-config\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403739 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-node-log\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403757 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-kubelet\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403773 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-etc-openvswitch\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403802 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-ovnkube-script-lib\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403822 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-run-ovn\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403838 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-run-ovn-kubernetes\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403858 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-var-lib-openvswitch\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403874 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-cni-bin\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403889 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-slash\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403905 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-run-systemd\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403933 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-run-netns\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 
crc kubenswrapper[4739]: I0218 14:09:20.403951 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-cni-netd\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403968 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j7kd\" (UniqueName: \"kubernetes.io/projected/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-kube-api-access-9j7kd\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.403982 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-systemd-units\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.404075 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-log-socket" (OuterVolumeSpecName: "log-socket") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.404097 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.404113 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.404130 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.404145 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.404161 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.405116 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-node-log" (OuterVolumeSpecName: "node-log") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.405143 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.405162 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.405178 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.405199 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-slash" (OuterVolumeSpecName: "host-slash") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.405268 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.405263 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.405379 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.405493 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.405639 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.405672 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.409893 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-kube-api-access-dtd5n" (OuterVolumeSpecName: "kube-api-access-dtd5n") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "kube-api-access-dtd5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.419349 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.419704 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" (UID: "f04e1fa3-4bb9-41e9-bf1d-a2862fb63224"). 
InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.504893 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-run-netns\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.504946 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-cni-netd\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.504973 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-systemd-units\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.504996 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j7kd\" (UniqueName: \"kubernetes.io/projected/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-kube-api-access-9j7kd\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505034 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-log-socket\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505042 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-cni-netd\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505084 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-systemd-units\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505059 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-env-overrides\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505202 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-ovn-node-metrics-cert\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 
14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505264 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505312 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-run-openvswitch\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505340 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-ovnkube-config\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505384 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-kubelet\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505405 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-node-log\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505429 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-etc-openvswitch\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505548 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-ovnkube-script-lib\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505572 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-run-openvswitch\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505603 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-run-ovn\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505666 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-env-overrides\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505664 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-run-netns\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505734 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-log-socket\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505761 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-node-log\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.505768 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-kubelet\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506159 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-etc-openvswitch\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506205 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-ovnkube-config\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506242 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506277 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-run-ovn\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506350 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-run-ovn-kubernetes\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506430 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-run-ovn-kubernetes\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506531 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-var-lib-openvswitch\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506583 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-var-lib-openvswitch\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506592 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-cni-bin\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506621 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-slash\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506653 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-run-systemd\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506691 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-slash\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506694 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-host-cni-bin\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506860 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-run-systemd\") pod \"ovnkube-node-njz85\" (UID: 
\"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506881 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-ovnkube-script-lib\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.506893 4739 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507080 4739 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507142 4739 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507202 4739 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507261 4739 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507317 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtd5n\" (UniqueName: \"kubernetes.io/projected/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-kube-api-access-dtd5n\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507368 4739 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-node-log\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507423 4739 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507491 4739 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507556 4739 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507608 4739 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507662 4739 
reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507715 4739 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507767 4739 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507819 4739 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507873 4739 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507924 4739 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-host-slash\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.507973 4739 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.508025 4739 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.508077 4739 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224-log-socket\") on node \"crc\" DevicePath \"\"" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.511038 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-ovn-node-metrics-cert\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.523294 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j7kd\" (UniqueName: \"kubernetes.io/projected/7e037260-564c-4a0e-bfd4-f5452ccd7e5b-kube-api-access-9j7kd\") pod \"ovnkube-node-njz85\" (UID: \"7e037260-564c-4a0e-bfd4-f5452ccd7e5b\") " pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.590801 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.700342 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" event={"ID":"7e037260-564c-4a0e-bfd4-f5452ccd7e5b","Type":"ContainerStarted","Data":"3f5bb4b788270d83cf1ae7e041c7cf11a02a5fd2aa5c9b8f5840253f4687d109"} Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.705396 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovn-acl-logging/0.log" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.706035 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-x4j94_f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/ovn-controller/0.log" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.706630 4739 generic.go:334] "Generic (PLEG): container finished" podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e" exitCode=0 Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.706713 4739 generic.go:334] "Generic (PLEG): container finished" podID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" containerID="15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41" exitCode=0 Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.706859 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.707370 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e"} Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.707432 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41"} Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.707459 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-x4j94" event={"ID":"f04e1fa3-4bb9-41e9-bf1d-a2862fb63224","Type":"ContainerDied","Data":"994cdd394e91062d3bf50c4eb1ba16a7ab9c2957bfb870b8f9ecfcf4d7fc50a5"} Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.707483 4739 scope.go:117] "RemoveContainer" containerID="54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.715603 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9slg_ec8fd6de-f77b-48a7-848f-a1b94e866365/kube-multus/2.log" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.739771 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc"] Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.740673 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.741397 4739 scope.go:117] "RemoveContainer" containerID="76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.743757 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.743919 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.758325 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x4j94"] Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.758855 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-qwkkp" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.780418 4739 scope.go:117] "RemoveContainer" containerID="d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.792822 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-x4j94"] Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.798384 4739 scope.go:117] "RemoveContainer" containerID="f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.813692 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htg7c\" (UniqueName: \"kubernetes.io/projected/ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc-kube-api-access-htg7c\") pod \"obo-prometheus-operator-68bc856cb9-c9tcc\" (UID: \"ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.827613 4739 scope.go:117] "RemoveContainer" containerID="212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.856171 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h"] Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.864154 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.868763 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-4rltn" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.868965 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.870564 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6"] Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.871658 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.893920 4739 scope.go:117] "RemoveContainer" containerID="15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.909704 4739 scope.go:117] "RemoveContainer" containerID="fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.914978 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d337f75-bb26-461d-9519-f17c333cfc55-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5ff-49bj6\" (UID: \"3d337f75-bb26-461d-9519-f17c333cfc55\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.915030 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d337f75-bb26-461d-9519-f17c333cfc55-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5ff-49bj6\" (UID: \"3d337f75-bb26-461d-9519-f17c333cfc55\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.915084 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e257eada-747c-4c16-ade0-64120ce08e5b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5ff-7mn2h\" (UID: \"e257eada-747c-4c16-ade0-64120ce08e5b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.915151 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htg7c\" (UniqueName: \"kubernetes.io/projected/ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc-kube-api-access-htg7c\") pod \"obo-prometheus-operator-68bc856cb9-c9tcc\" (UID: \"ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.915210 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e257eada-747c-4c16-ade0-64120ce08e5b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5ff-7mn2h\" (UID: \"e257eada-747c-4c16-ade0-64120ce08e5b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.928572 4739 scope.go:117] "RemoveContainer" containerID="12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.935350 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htg7c\" (UniqueName: \"kubernetes.io/projected/ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc-kube-api-access-htg7c\") pod \"obo-prometheus-operator-68bc856cb9-c9tcc\" (UID: \"ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.954628 4739 scope.go:117] "RemoveContainer" 
containerID="bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.970590 4739 scope.go:117] "RemoveContainer" containerID="54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.970961 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8\": container with ID starting with 54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8 not found: ID does not exist" containerID="54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.971002 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8"} err="failed to get container status \"54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8\": rpc error: code = NotFound desc = could not find container \"54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8\": container with ID starting with 54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.971036 4739 scope.go:117] "RemoveContainer" containerID="76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.971311 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\": container with ID starting with 76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34 not found: ID does not exist" containerID="76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.971340 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34"} err="failed to get container status \"76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\": rpc error: code = NotFound desc = could not find container \"76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\": container with ID starting with 76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.971360 4739 scope.go:117] "RemoveContainer" containerID="d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.971851 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-mqkqw"] Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.972526 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.973757 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\": container with ID starting with d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216 not found: ID does not exist" containerID="d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.973796 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216"} err="failed to get container status \"d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\": rpc error: code = NotFound desc = could not find container \"d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\": container with ID starting with d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.973856 4739 scope.go:117] "RemoveContainer" containerID="f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.974130 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\": container with ID starting with f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334 not found: ID does not exist" containerID="f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.974161 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334"} err="failed to get container status \"f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\": rpc error: code = NotFound desc = could not find container \"f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\": container with ID starting with f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.974178 4739 scope.go:117] "RemoveContainer" containerID="212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.974420 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\": container with ID starting with 212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e not found: ID does not exist" containerID="212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.974481 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e"} err="failed to get container status \"212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\": rpc error: code = NotFound desc = could not find container \"212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\": container with ID starting with 
212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.974506 4739 scope.go:117] "RemoveContainer" containerID="15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.974929 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-z95ts" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.975086 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.975169 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\": container with ID starting with 15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41 not found: ID does not exist" containerID="15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.975193 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41"} err="failed to get container status \"15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\": rpc error: code = NotFound desc = could not find container \"15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\": container with ID starting with 15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.975208 4739 scope.go:117] "RemoveContainer" containerID="fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.975598 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\": container with ID starting with fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552 not found: ID does not exist" containerID="fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.975618 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552"} err="failed to get container status \"fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\": rpc error: code = NotFound desc = could not find container \"fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\": container with ID starting with fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.975630 4739 scope.go:117] "RemoveContainer" containerID="12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.975850 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\": container with ID starting with 12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8 not found: ID does not exist" 
containerID="12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.975870 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8"} err="failed to get container status \"12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\": rpc error: code = NotFound desc = could not find container \"12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\": container with ID starting with 12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.975881 4739 scope.go:117] "RemoveContainer" containerID="bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7" Feb 18 14:09:20 crc kubenswrapper[4739]: E0218 14:09:20.976074 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\": container with ID starting with bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7 not found: ID does not exist" containerID="bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.976101 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7"} err="failed to get container status \"bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\": rpc error: code = NotFound desc = could not find container \"bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\": container with ID starting with bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.976118 4739 scope.go:117] "RemoveContainer" containerID="54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.976310 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8"} err="failed to get container status \"54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8\": rpc error: code = NotFound desc = could not find container \"54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8\": container with ID starting with 54f1ff2dae8299c00ec3d9d415009641cfa77f5870f06536cd36656e1dbd92f8 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.976326 4739 scope.go:117] "RemoveContainer" containerID="76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.976563 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34"} err="failed to get container status \"76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\": rpc error: code = NotFound desc = could not find container \"76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34\": container with ID starting with 76a546261883c299830539852582b82f4712ce2be63f28b0bc682b302a4f4f34 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.976591 4739 scope.go:117] "RemoveContainer" 
containerID="d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.976768 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216"} err="failed to get container status \"d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\": rpc error: code = NotFound desc = could not find container \"d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216\": container with ID starting with d26b427c3c739e2f6e9d94e35351256df17447461a85092487cf8c9a937ae216 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.976785 4739 scope.go:117] "RemoveContainer" containerID="f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.977027 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334"} err="failed to get container status \"f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\": rpc error: code = NotFound desc = could not find container \"f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334\": container with ID starting with f9d857cafc79b7f3c8474e4635c9ceabbcbfc77646b2c6d00ddce10df19bf334 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.977047 4739 scope.go:117] "RemoveContainer" containerID="212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.977329 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e"} err="failed to get container status \"212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\": rpc error: code = NotFound desc = could not find container \"212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e\": container with ID starting with 212bffa88e146fba17c82a760558a159b4b2458d58d7a1aa1a428eb0f63bed6e not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.977349 4739 scope.go:117] "RemoveContainer" containerID="15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.977606 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41"} err="failed to get container status \"15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\": rpc error: code = NotFound desc = could not find container \"15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41\": container with ID starting with 15b9f1010f41fb7b9dca303a2d42ebdcb3311feea320c74fd87b0963a4667a41 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.977626 4739 scope.go:117] "RemoveContainer" containerID="fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.977813 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552"} err="failed to get container status \"fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\": rpc error: code = NotFound desc = could not find 
container \"fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552\": container with ID starting with fe3b10fbc1ec25a84c3758ee103a1e3efd1aa78dce9ee27289f85b95bf191552 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.977830 4739 scope.go:117] "RemoveContainer" containerID="12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.978080 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8"} err="failed to get container status \"12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\": rpc error: code = NotFound desc = could not find container \"12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8\": container with ID starting with 12d1b9266b463baab574875b1e0b724387e2783ed1baf949b8896a3ef1b9f3a8 not found: ID does not exist" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.978101 4739 scope.go:117] "RemoveContainer" containerID="bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7" Feb 18 14:09:20 crc kubenswrapper[4739]: I0218 14:09:20.978294 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7"} err="failed to get container status \"bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\": rpc error: code = NotFound desc = could not find container \"bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7\": container with ID starting with bdf854f4f339299d2b62050129877d2bea203bc63e5dbeb01726c6ebeb496de7 not found: ID does not exist" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.015734 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e257eada-747c-4c16-ade0-64120ce08e5b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5ff-7mn2h\" (UID: \"e257eada-747c-4c16-ade0-64120ce08e5b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.015808 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e257eada-747c-4c16-ade0-64120ce08e5b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5ff-7mn2h\" (UID: \"e257eada-747c-4c16-ade0-64120ce08e5b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.015843 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d337f75-bb26-461d-9519-f17c333cfc55-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5ff-49bj6\" (UID: \"3d337f75-bb26-461d-9519-f17c333cfc55\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.015870 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/0348c042-11c0-4a27-a8d4-04beea8e11a3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-mqkqw\" (UID: \"0348c042-11c0-4a27-a8d4-04beea8e11a3\") " 
pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.015891 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d337f75-bb26-461d-9519-f17c333cfc55-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5ff-49bj6\" (UID: \"3d337f75-bb26-461d-9519-f17c333cfc55\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.015911 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpz9w\" (UniqueName: \"kubernetes.io/projected/0348c042-11c0-4a27-a8d4-04beea8e11a3-kube-api-access-xpz9w\") pod \"observability-operator-59bdc8b94-mqkqw\" (UID: \"0348c042-11c0-4a27-a8d4-04beea8e11a3\") " pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.018684 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d337f75-bb26-461d-9519-f17c333cfc55-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5ff-49bj6\" (UID: \"3d337f75-bb26-461d-9519-f17c333cfc55\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.018709 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e257eada-747c-4c16-ade0-64120ce08e5b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5ff-7mn2h\" (UID: \"e257eada-747c-4c16-ade0-64120ce08e5b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.018686 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d337f75-bb26-461d-9519-f17c333cfc55-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5ff-49bj6\" (UID: \"3d337f75-bb26-461d-9519-f17c333cfc55\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.018841 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e257eada-747c-4c16-ade0-64120ce08e5b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-547f5ff-7mn2h\" (UID: \"e257eada-747c-4c16-ade0-64120ce08e5b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.060333 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-lpf5k"] Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.061109 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.065442 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-k7x6s" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.087990 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.117414 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/0348c042-11c0-4a27-a8d4-04beea8e11a3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-mqkqw\" (UID: \"0348c042-11c0-4a27-a8d4-04beea8e11a3\") " pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.117489 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpz9w\" (UniqueName: \"kubernetes.io/projected/0348c042-11c0-4a27-a8d4-04beea8e11a3-kube-api-access-xpz9w\") pod \"observability-operator-59bdc8b94-mqkqw\" (UID: \"0348c042-11c0-4a27-a8d4-04beea8e11a3\") " pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.117529 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe-openshift-service-ca\") pod \"perses-operator-5bf474d74f-lpf5k\" (UID: \"2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe\") " pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.117588 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw9f2\" (UniqueName: \"kubernetes.io/projected/2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe-kube-api-access-nw9f2\") pod \"perses-operator-5bf474d74f-lpf5k\" (UID: \"2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe\") " pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.120556 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc_0(f05be0938706833fbc0743c46db4bf246ef03c44b5f93b6e433e07a7ab66e795): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.120785 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc_0(f05be0938706833fbc0743c46db4bf246ef03c44b5f93b6e433e07a7ab66e795): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.120823 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc_0(f05be0938706833fbc0743c46db4bf246ef03c44b5f93b6e433e07a7ab66e795): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.120889 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators(ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators(ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc_0(f05be0938706833fbc0743c46db4bf246ef03c44b5f93b6e433e07a7ab66e795): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" podUID="ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.122157 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/0348c042-11c0-4a27-a8d4-04beea8e11a3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-mqkqw\" (UID: \"0348c042-11c0-4a27-a8d4-04beea8e11a3\") " pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.137281 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpz9w\" (UniqueName: \"kubernetes.io/projected/0348c042-11c0-4a27-a8d4-04beea8e11a3-kube-api-access-xpz9w\") pod \"observability-operator-59bdc8b94-mqkqw\" (UID: \"0348c042-11c0-4a27-a8d4-04beea8e11a3\") " pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.209692 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.218475 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw9f2\" (UniqueName: \"kubernetes.io/projected/2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe-kube-api-access-nw9f2\") pod \"perses-operator-5bf474d74f-lpf5k\" (UID: \"2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe\") " pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.218600 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe-openshift-service-ca\") pod \"perses-operator-5bf474d74f-lpf5k\" (UID: \"2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe\") " pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.219707 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe-openshift-service-ca\") pod \"perses-operator-5bf474d74f-lpf5k\" (UID: \"2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe\") " pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.228045 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.234517 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators_e257eada-747c-4c16-ade0-64120ce08e5b_0(176335bd8c350bde1afbe3ecd3ae094b4895547f330f5fd64845cb0fb9ccb4a4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.234604 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators_e257eada-747c-4c16-ade0-64120ce08e5b_0(176335bd8c350bde1afbe3ecd3ae094b4895547f330f5fd64845cb0fb9ccb4a4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.234626 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators_e257eada-747c-4c16-ade0-64120ce08e5b_0(176335bd8c350bde1afbe3ecd3ae094b4895547f330f5fd64845cb0fb9ccb4a4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.234670 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators(e257eada-747c-4c16-ade0-64120ce08e5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators(e257eada-747c-4c16-ade0-64120ce08e5b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators_e257eada-747c-4c16-ade0-64120ce08e5b_0(176335bd8c350bde1afbe3ecd3ae094b4895547f330f5fd64845cb0fb9ccb4a4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" podUID="e257eada-747c-4c16-ade0-64120ce08e5b" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.236101 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw9f2\" (UniqueName: \"kubernetes.io/projected/2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe-kube-api-access-nw9f2\") pod \"perses-operator-5bf474d74f-lpf5k\" (UID: \"2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe\") " pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.275503 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators_3d337f75-bb26-461d-9519-f17c333cfc55_0(c4e2a98429a4df784d4991abeb98e2d0167df5c5196ad7b5464718cf13d5ec5c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.275621 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators_3d337f75-bb26-461d-9519-f17c333cfc55_0(c4e2a98429a4df784d4991abeb98e2d0167df5c5196ad7b5464718cf13d5ec5c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.275649 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators_3d337f75-bb26-461d-9519-f17c333cfc55_0(c4e2a98429a4df784d4991abeb98e2d0167df5c5196ad7b5464718cf13d5ec5c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.275725 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators(3d337f75-bb26-461d-9519-f17c333cfc55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators(3d337f75-bb26-461d-9519-f17c333cfc55)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators_3d337f75-bb26-461d-9519-f17c333cfc55_0(c4e2a98429a4df784d4991abeb98e2d0167df5c5196ad7b5464718cf13d5ec5c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" podUID="3d337f75-bb26-461d-9519-f17c333cfc55" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.292746 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.313028 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mqkqw_openshift-operators_0348c042-11c0-4a27-a8d4-04beea8e11a3_0(0b315c6e3e4d3da9b0dcb8122f0e682be850db2b60815210079a8d5c59180f7d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.313083 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mqkqw_openshift-operators_0348c042-11c0-4a27-a8d4-04beea8e11a3_0(0b315c6e3e4d3da9b0dcb8122f0e682be850db2b60815210079a8d5c59180f7d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.313105 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mqkqw_openshift-operators_0348c042-11c0-4a27-a8d4-04beea8e11a3_0(0b315c6e3e4d3da9b0dcb8122f0e682be850db2b60815210079a8d5c59180f7d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.313148 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-mqkqw_openshift-operators(0348c042-11c0-4a27-a8d4-04beea8e11a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-mqkqw_openshift-operators(0348c042-11c0-4a27-a8d4-04beea8e11a3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mqkqw_openshift-operators_0348c042-11c0-4a27-a8d4-04beea8e11a3_0(0b315c6e3e4d3da9b0dcb8122f0e682be850db2b60815210079a8d5c59180f7d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.481295 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.510251 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-lpf5k_openshift-operators_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe_0(5adb7f999566b2f506b685e91e3395da380bb13ad75d368a4481b82ecc1a27ff): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.510631 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-lpf5k_openshift-operators_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe_0(5adb7f999566b2f506b685e91e3395da380bb13ad75d368a4481b82ecc1a27ff): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.510686 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-lpf5k_openshift-operators_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe_0(5adb7f999566b2f506b685e91e3395da380bb13ad75d368a4481b82ecc1a27ff): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:21 crc kubenswrapper[4739]: E0218 14:09:21.510731 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-lpf5k_openshift-operators(2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-lpf5k_openshift-operators(2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-lpf5k_openshift-operators_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe_0(5adb7f999566b2f506b685e91e3395da380bb13ad75d368a4481b82ecc1a27ff): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" podUID="2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe" Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.724909 4739 generic.go:334] "Generic (PLEG): container finished" podID="7e037260-564c-4a0e-bfd4-f5452ccd7e5b" containerID="63139a00520ccb495ae7aeb05b4ec94cbc4f0702737ff09ce59721f657efee35" exitCode=0 Feb 18 14:09:21 crc kubenswrapper[4739]: I0218 14:09:21.725118 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" event={"ID":"7e037260-564c-4a0e-bfd4-f5452ccd7e5b","Type":"ContainerDied","Data":"63139a00520ccb495ae7aeb05b4ec94cbc4f0702737ff09ce59721f657efee35"} Feb 18 14:09:22 crc kubenswrapper[4739]: I0218 14:09:22.419058 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f04e1fa3-4bb9-41e9-bf1d-a2862fb63224" path="/var/lib/kubelet/pods/f04e1fa3-4bb9-41e9-bf1d-a2862fb63224/volumes" Feb 18 14:09:22 crc kubenswrapper[4739]: I0218 14:09:22.736693 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" event={"ID":"7e037260-564c-4a0e-bfd4-f5452ccd7e5b","Type":"ContainerStarted","Data":"65d2fab975fca85e33a1bd10769b030be3b635df185632f3c2c951c0583f2071"} Feb 18 14:09:22 crc kubenswrapper[4739]: I0218 14:09:22.736756 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" event={"ID":"7e037260-564c-4a0e-bfd4-f5452ccd7e5b","Type":"ContainerStarted","Data":"5573dd0cbf8bff48419d39b0da563d531642df77e89e7eb6890ad393d1e1f695"} Feb 18 14:09:22 crc kubenswrapper[4739]: I0218 14:09:22.736773 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" event={"ID":"7e037260-564c-4a0e-bfd4-f5452ccd7e5b","Type":"ContainerStarted","Data":"85e1b3d352c8763e335723cf2f2fb986e3c5cbaee36472135cad3b3ef5a339f8"} Feb 18 14:09:22 crc kubenswrapper[4739]: I0218 14:09:22.736785 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" event={"ID":"7e037260-564c-4a0e-bfd4-f5452ccd7e5b","Type":"ContainerStarted","Data":"bc3d244581c25b68aa9399475fd20be97da0ca767ceeb76714d0a9d6aaf6bff4"} Feb 18 14:09:22 crc kubenswrapper[4739]: I0218 14:09:22.736796 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" event={"ID":"7e037260-564c-4a0e-bfd4-f5452ccd7e5b","Type":"ContainerStarted","Data":"2a70e718a39cb88b5ae82e8d4003a1f04906d0c1143ac825bcec0ef96dbf1451"} Feb 18 14:09:22 crc kubenswrapper[4739]: I0218 14:09:22.736808 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" 
event={"ID":"7e037260-564c-4a0e-bfd4-f5452ccd7e5b","Type":"ContainerStarted","Data":"2c3fb374c31063e49b3fb92b705754f567c66974927fe56f22d53f9bf399f656"} Feb 18 14:09:24 crc kubenswrapper[4739]: I0218 14:09:24.753508 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" event={"ID":"7e037260-564c-4a0e-bfd4-f5452ccd7e5b","Type":"ContainerStarted","Data":"feabe2a78254db093534d4eae996c0b083567faaf789ca5e4af8127006774819"} Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.744634 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-lpf5k"] Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.745358 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.745934 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.749959 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h"] Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.750103 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.750730 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.770688 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6"] Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.770818 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.771328 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.778851 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc"] Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.778973 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.779418 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.788345 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-mqkqw"] Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.788499 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.788943 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.790436 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" event={"ID":"7e037260-564c-4a0e-bfd4-f5452ccd7e5b","Type":"ContainerStarted","Data":"c1fc48d05342165ee0f4db047aae45eb1984ae0609a6eb9066c46db384e7972d"} Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.791741 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.791778 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.791854 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.791935 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators_e257eada-747c-4c16-ade0-64120ce08e5b_0(638dca07e566adf2a525ec36fe83eca6e7d2f2e6bccaafdc0842f4000e9ed730): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.791972 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators_e257eada-747c-4c16-ade0-64120ce08e5b_0(638dca07e566adf2a525ec36fe83eca6e7d2f2e6bccaafdc0842f4000e9ed730): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.791997 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators_e257eada-747c-4c16-ade0-64120ce08e5b_0(638dca07e566adf2a525ec36fe83eca6e7d2f2e6bccaafdc0842f4000e9ed730): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.792040 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators(e257eada-747c-4c16-ade0-64120ce08e5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators(e257eada-747c-4c16-ade0-64120ce08e5b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators_e257eada-747c-4c16-ade0-64120ce08e5b_0(638dca07e566adf2a525ec36fe83eca6e7d2f2e6bccaafdc0842f4000e9ed730): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" podUID="e257eada-747c-4c16-ade0-64120ce08e5b" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.835917 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-lpf5k_openshift-operators_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe_0(9be6e32803400cca9685da5ba410475825de1006efa7c15a23565570f414617d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.836239 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-lpf5k_openshift-operators_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe_0(9be6e32803400cca9685da5ba410475825de1006efa7c15a23565570f414617d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.836262 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-lpf5k_openshift-operators_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe_0(9be6e32803400cca9685da5ba410475825de1006efa7c15a23565570f414617d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.836301 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-lpf5k_openshift-operators(2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-lpf5k_openshift-operators(2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-lpf5k_openshift-operators_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe_0(9be6e32803400cca9685da5ba410475825de1006efa7c15a23565570f414617d): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" podUID="2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.837118 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" podStartSLOduration=7.837100194 podStartE2EDuration="7.837100194s" podCreationTimestamp="2026-02-18 14:09:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:09:27.83615779 +0000 UTC m=+600.331878732" watchObservedRunningTime="2026-02-18 14:09:27.837100194 +0000 UTC m=+600.332821106" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.846739 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:27 crc kubenswrapper[4739]: I0218 14:09:27.852995 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.859084 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators_3d337f75-bb26-461d-9519-f17c333cfc55_0(cc29f6f5df374cf2db83f5207506adb9788796ba11fe9c5c5c352f5e1850f8cc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.859138 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators_3d337f75-bb26-461d-9519-f17c333cfc55_0(cc29f6f5df374cf2db83f5207506adb9788796ba11fe9c5c5c352f5e1850f8cc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.859178 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators_3d337f75-bb26-461d-9519-f17c333cfc55_0(cc29f6f5df374cf2db83f5207506adb9788796ba11fe9c5c5c352f5e1850f8cc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.859224 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators(3d337f75-bb26-461d-9519-f17c333cfc55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators(3d337f75-bb26-461d-9519-f17c333cfc55)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators_3d337f75-bb26-461d-9519-f17c333cfc55_0(cc29f6f5df374cf2db83f5207506adb9788796ba11fe9c5c5c352f5e1850f8cc): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" podUID="3d337f75-bb26-461d-9519-f17c333cfc55" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.867643 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc_0(0731d138580baef97d7bbb5ee9dbb974f549f8b461b3e12c7a4193988427e302): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.867723 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc_0(0731d138580baef97d7bbb5ee9dbb974f549f8b461b3e12c7a4193988427e302): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.867752 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc_0(0731d138580baef97d7bbb5ee9dbb974f549f8b461b3e12c7a4193988427e302): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.867800 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators(ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators(ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc_0(0731d138580baef97d7bbb5ee9dbb974f549f8b461b3e12c7a4193988427e302): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" podUID="ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.881625 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mqkqw_openshift-operators_0348c042-11c0-4a27-a8d4-04beea8e11a3_0(9dcccb5051222fe47da8d71d7fa5560cbce1f133bf61ddfe24643cddaed03722): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.881685 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mqkqw_openshift-operators_0348c042-11c0-4a27-a8d4-04beea8e11a3_0(9dcccb5051222fe47da8d71d7fa5560cbce1f133bf61ddfe24643cddaed03722): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.881707 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mqkqw_openshift-operators_0348c042-11c0-4a27-a8d4-04beea8e11a3_0(9dcccb5051222fe47da8d71d7fa5560cbce1f133bf61ddfe24643cddaed03722): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:27 crc kubenswrapper[4739]: E0218 14:09:27.881746 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-mqkqw_openshift-operators(0348c042-11c0-4a27-a8d4-04beea8e11a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-mqkqw_openshift-operators(0348c042-11c0-4a27-a8d4-04beea8e11a3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mqkqw_openshift-operators_0348c042-11c0-4a27-a8d4-04beea8e11a3_0(9dcccb5051222fe47da8d71d7fa5560cbce1f133bf61ddfe24643cddaed03722): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" Feb 18 14:09:28 crc kubenswrapper[4739]: I0218 14:09:28.661028 4739 scope.go:117] "RemoveContainer" containerID="4e07a94ec0847b4e99755ab2a06cb038c67fb9badd5a1660eeebdbdd132f59cc" Feb 18 14:09:28 crc kubenswrapper[4739]: I0218 14:09:28.684817 4739 scope.go:117] "RemoveContainer" containerID="b2a60f4fb9b49f347db21a50c2097f9a1a95de43e825543cb9badb0925f33d62" Feb 18 14:09:32 crc kubenswrapper[4739]: I0218 14:09:32.410946 4739 scope.go:117] "RemoveContainer" containerID="d2933eda9affe42ab15a0347bde54987f36d532b9d62d4495588205b777d7ff1" Feb 18 14:09:32 crc kubenswrapper[4739]: E0218 14:09:32.411406 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-h9slg_openshift-multus(ec8fd6de-f77b-48a7-848f-a1b94e866365)\"" pod="openshift-multus/multus-h9slg" podUID="ec8fd6de-f77b-48a7-848f-a1b94e866365" Feb 18 14:09:41 crc kubenswrapper[4739]: I0218 14:09:41.410207 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:41 crc kubenswrapper[4739]: I0218 14:09:41.410264 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:41 crc kubenswrapper[4739]: I0218 14:09:41.410280 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:41 crc kubenswrapper[4739]: I0218 14:09:41.410784 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:41 crc kubenswrapper[4739]: I0218 14:09:41.411252 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:41 crc kubenswrapper[4739]: I0218 14:09:41.411538 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:41 crc kubenswrapper[4739]: E0218 14:09:41.471515 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mqkqw_openshift-operators_0348c042-11c0-4a27-a8d4-04beea8e11a3_0(47d994e566875d21e446e410dfb659ff06f8970898e8adf5f64f15b9c437cc20): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:09:41 crc kubenswrapper[4739]: E0218 14:09:41.472396 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mqkqw_openshift-operators_0348c042-11c0-4a27-a8d4-04beea8e11a3_0(47d994e566875d21e446e410dfb659ff06f8970898e8adf5f64f15b9c437cc20): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:41 crc kubenswrapper[4739]: E0218 14:09:41.472609 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mqkqw_openshift-operators_0348c042-11c0-4a27-a8d4-04beea8e11a3_0(47d994e566875d21e446e410dfb659ff06f8970898e8adf5f64f15b9c437cc20): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:41 crc kubenswrapper[4739]: E0218 14:09:41.472803 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-mqkqw_openshift-operators(0348c042-11c0-4a27-a8d4-04beea8e11a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-mqkqw_openshift-operators(0348c042-11c0-4a27-a8d4-04beea8e11a3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-mqkqw_openshift-operators_0348c042-11c0-4a27-a8d4-04beea8e11a3_0(47d994e566875d21e446e410dfb659ff06f8970898e8adf5f64f15b9c437cc20): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" Feb 18 14:09:41 crc kubenswrapper[4739]: E0218 14:09:41.480667 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-lpf5k_openshift-operators_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe_0(5d2adc2a3222dac4109e62701e575bc0fef1eec021914bca503b01a302a5d294): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 18 14:09:41 crc kubenswrapper[4739]: E0218 14:09:41.480728 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-lpf5k_openshift-operators_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe_0(5d2adc2a3222dac4109e62701e575bc0fef1eec021914bca503b01a302a5d294): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:41 crc kubenswrapper[4739]: E0218 14:09:41.480750 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-lpf5k_openshift-operators_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe_0(5d2adc2a3222dac4109e62701e575bc0fef1eec021914bca503b01a302a5d294): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:41 crc kubenswrapper[4739]: E0218 14:09:41.480793 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-lpf5k_openshift-operators(2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-lpf5k_openshift-operators(2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-lpf5k_openshift-operators_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe_0(5d2adc2a3222dac4109e62701e575bc0fef1eec021914bca503b01a302a5d294): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" podUID="2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe" Feb 18 14:09:41 crc kubenswrapper[4739]: E0218 14:09:41.486243 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators_e257eada-747c-4c16-ade0-64120ce08e5b_0(918d5611dd8e72c8323e68ec7ad3841de484eed9514cb6e426c9f562ef95d118): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:09:41 crc kubenswrapper[4739]: E0218 14:09:41.486306 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators_e257eada-747c-4c16-ade0-64120ce08e5b_0(918d5611dd8e72c8323e68ec7ad3841de484eed9514cb6e426c9f562ef95d118): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:41 crc kubenswrapper[4739]: E0218 14:09:41.486330 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators_e257eada-747c-4c16-ade0-64120ce08e5b_0(918d5611dd8e72c8323e68ec7ad3841de484eed9514cb6e426c9f562ef95d118): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:41 crc kubenswrapper[4739]: E0218 14:09:41.486372 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators(e257eada-747c-4c16-ade0-64120ce08e5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators(e257eada-747c-4c16-ade0-64120ce08e5b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_openshift-operators_e257eada-747c-4c16-ade0-64120ce08e5b_0(918d5611dd8e72c8323e68ec7ad3841de484eed9514cb6e426c9f562ef95d118): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" podUID="e257eada-747c-4c16-ade0-64120ce08e5b" Feb 18 14:09:42 crc kubenswrapper[4739]: I0218 14:09:42.409680 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:42 crc kubenswrapper[4739]: I0218 14:09:42.410011 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:42 crc kubenswrapper[4739]: I0218 14:09:42.410492 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:42 crc kubenswrapper[4739]: I0218 14:09:42.410978 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:42 crc kubenswrapper[4739]: E0218 14:09:42.462753 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc_0(7981ff9b0d20958f7e97511b52917feff04b6d482461b771fce93ca6d0444954): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:09:42 crc kubenswrapper[4739]: E0218 14:09:42.462837 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc_0(7981ff9b0d20958f7e97511b52917feff04b6d482461b771fce93ca6d0444954): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:42 crc kubenswrapper[4739]: E0218 14:09:42.462863 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc_0(7981ff9b0d20958f7e97511b52917feff04b6d482461b771fce93ca6d0444954): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:42 crc kubenswrapper[4739]: E0218 14:09:42.462928 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators(ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators(ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-c9tcc_openshift-operators_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc_0(7981ff9b0d20958f7e97511b52917feff04b6d482461b771fce93ca6d0444954): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" podUID="ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc" Feb 18 14:09:42 crc kubenswrapper[4739]: E0218 14:09:42.470844 4739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators_3d337f75-bb26-461d-9519-f17c333cfc55_0(a52a0f11b151fd2fb523fc2cbc8c104a91f9244b893ec4dd1ec1f4a3ea5501cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 14:09:42 crc kubenswrapper[4739]: E0218 14:09:42.470943 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators_3d337f75-bb26-461d-9519-f17c333cfc55_0(a52a0f11b151fd2fb523fc2cbc8c104a91f9244b893ec4dd1ec1f4a3ea5501cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:42 crc kubenswrapper[4739]: E0218 14:09:42.470984 4739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators_3d337f75-bb26-461d-9519-f17c333cfc55_0(a52a0f11b151fd2fb523fc2cbc8c104a91f9244b893ec4dd1ec1f4a3ea5501cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:42 crc kubenswrapper[4739]: E0218 14:09:42.471070 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators(3d337f75-bb26-461d-9519-f17c333cfc55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators(3d337f75-bb26-461d-9519-f17c333cfc55)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_openshift-operators_3d337f75-bb26-461d-9519-f17c333cfc55_0(a52a0f11b151fd2fb523fc2cbc8c104a91f9244b893ec4dd1ec1f4a3ea5501cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" podUID="3d337f75-bb26-461d-9519-f17c333cfc55" Feb 18 14:09:43 crc kubenswrapper[4739]: I0218 14:09:43.410106 4739 scope.go:117] "RemoveContainer" containerID="d2933eda9affe42ab15a0347bde54987f36d532b9d62d4495588205b777d7ff1" Feb 18 14:09:43 crc kubenswrapper[4739]: I0218 14:09:43.875543 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h9slg_ec8fd6de-f77b-48a7-848f-a1b94e866365/kube-multus/2.log" Feb 18 14:09:43 crc kubenswrapper[4739]: I0218 14:09:43.875842 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h9slg" event={"ID":"ec8fd6de-f77b-48a7-848f-a1b94e866365","Type":"ContainerStarted","Data":"3624dca3884a0e7f68dae865e9e5bdd570950f415bd75d4d1b9e008103284e71"} Feb 18 14:09:50 crc kubenswrapper[4739]: I0218 14:09:50.618133 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" Feb 18 14:09:52 crc kubenswrapper[4739]: I0218 14:09:52.410421 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:52 crc kubenswrapper[4739]: I0218 14:09:52.411483 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:52 crc kubenswrapper[4739]: I0218 14:09:52.673235 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-mqkqw"] Feb 18 14:09:52 crc kubenswrapper[4739]: W0218 14:09:52.696688 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0348c042_11c0_4a27_a8d4_04beea8e11a3.slice/crio-d70eb6530267e3e32e9164c834f56a5baa48338aa801f030c172de20dadd064b WatchSource:0}: Error finding container d70eb6530267e3e32e9164c834f56a5baa48338aa801f030c172de20dadd064b: Status 404 returned error can't find the container with id d70eb6530267e3e32e9164c834f56a5baa48338aa801f030c172de20dadd064b Feb 18 14:09:52 crc kubenswrapper[4739]: I0218 14:09:52.930931 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" event={"ID":"0348c042-11c0-4a27-a8d4-04beea8e11a3","Type":"ContainerStarted","Data":"d70eb6530267e3e32e9164c834f56a5baa48338aa801f030c172de20dadd064b"} Feb 18 14:09:54 crc kubenswrapper[4739]: I0218 14:09:54.412065 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:54 crc kubenswrapper[4739]: I0218 14:09:54.413078 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" Feb 18 14:09:54 crc kubenswrapper[4739]: I0218 14:09:54.616021 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6"] Feb 18 14:09:54 crc kubenswrapper[4739]: I0218 14:09:54.943806 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" event={"ID":"3d337f75-bb26-461d-9519-f17c333cfc55","Type":"ContainerStarted","Data":"1257cfb8743d893ac8a100f8f8ecec53b7388a050916c37a5de6b793fb6d0158"} Feb 18 14:09:55 crc kubenswrapper[4739]: I0218 14:09:55.410127 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:55 crc kubenswrapper[4739]: I0218 14:09:55.410661 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" Feb 18 14:09:56 crc kubenswrapper[4739]: I0218 14:09:56.409991 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:56 crc kubenswrapper[4739]: I0218 14:09:56.410210 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:56 crc kubenswrapper[4739]: I0218 14:09:56.410985 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" Feb 18 14:09:56 crc kubenswrapper[4739]: I0218 14:09:56.411175 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:09:56 crc kubenswrapper[4739]: I0218 14:09:56.705961 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h"] Feb 18 14:09:59 crc kubenswrapper[4739]: W0218 14:09:59.038852 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode257eada_747c_4c16_ade0_64120ce08e5b.slice/crio-2311bc94fbb15d64e07451f005a4c474ddfbe166c23ab8c9166e94f29bed2d2b WatchSource:0}: Error finding container 2311bc94fbb15d64e07451f005a4c474ddfbe166c23ab8c9166e94f29bed2d2b: Status 404 returned error can't find the container with id 2311bc94fbb15d64e07451f005a4c474ddfbe166c23ab8c9166e94f29bed2d2b Feb 18 14:09:59 crc kubenswrapper[4739]: I0218 14:09:59.526328 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-lpf5k"] Feb 18 14:09:59 crc kubenswrapper[4739]: W0218 14:09:59.538126 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a79887e_1b6d_44ed_b3e1_f1c7c65b48fe.slice/crio-22b40a1cdab48f8c31449d12a6c0db4cc6afb040e9225381374ddb502c35d8bc WatchSource:0}: Error finding container 22b40a1cdab48f8c31449d12a6c0db4cc6afb040e9225381374ddb502c35d8bc: Status 404 returned error can't find the container with id 22b40a1cdab48f8c31449d12a6c0db4cc6afb040e9225381374ddb502c35d8bc Feb 18 14:09:59 crc kubenswrapper[4739]: I0218 14:09:59.765528 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc"] Feb 18 14:09:59 crc kubenswrapper[4739]: W0218 14:09:59.771805 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef4587aa_49cd_4fd3_a5e6_05b0b5139cbc.slice/crio-b5b877966b50500010c195706836080e516d31e5e0a98ebc14b73d5dcfcbc2dd WatchSource:0}: Error finding container b5b877966b50500010c195706836080e516d31e5e0a98ebc14b73d5dcfcbc2dd: Status 404 returned error can't find the container with id b5b877966b50500010c195706836080e516d31e5e0a98ebc14b73d5dcfcbc2dd Feb 18 14:09:59 crc kubenswrapper[4739]: I0218 14:09:59.981533 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" event={"ID":"ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc","Type":"ContainerStarted","Data":"b5b877966b50500010c195706836080e516d31e5e0a98ebc14b73d5dcfcbc2dd"} Feb 18 14:09:59 crc kubenswrapper[4739]: I0218 14:09:59.983988 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" event={"ID":"3d337f75-bb26-461d-9519-f17c333cfc55","Type":"ContainerStarted","Data":"867a1d1e35e96f2de0846410e776a8707b5f70b60e12991ebf4a39c25a659674"} Feb 18 14:09:59 crc kubenswrapper[4739]: I0218 14:09:59.985652 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" event={"ID":"2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe","Type":"ContainerStarted","Data":"22b40a1cdab48f8c31449d12a6c0db4cc6afb040e9225381374ddb502c35d8bc"} Feb 18 14:09:59 crc kubenswrapper[4739]: I0218 14:09:59.987779 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" 
event={"ID":"e257eada-747c-4c16-ade0-64120ce08e5b","Type":"ContainerStarted","Data":"1976a300df6104a78e1c3fc23c067d495200b7e4dda5fded82016791e4d53d0a"} Feb 18 14:09:59 crc kubenswrapper[4739]: I0218 14:09:59.987806 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" event={"ID":"e257eada-747c-4c16-ade0-64120ce08e5b","Type":"ContainerStarted","Data":"2311bc94fbb15d64e07451f005a4c474ddfbe166c23ab8c9166e94f29bed2d2b"} Feb 18 14:09:59 crc kubenswrapper[4739]: I0218 14:09:59.989145 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" event={"ID":"0348c042-11c0-4a27-a8d4-04beea8e11a3","Type":"ContainerStarted","Data":"f4ddca9038d3bd4756dcc8087b9a9bb925c7b018b9bc46301518d2782cc7fee9"} Feb 18 14:09:59 crc kubenswrapper[4739]: I0218 14:09:59.989414 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:09:59 crc kubenswrapper[4739]: I0218 14:09:59.992364 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 14:10:00 crc kubenswrapper[4739]: I0218 14:10:00.002818 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-49bj6" podStartSLOduration=35.355427238 podStartE2EDuration="40.002796284s" podCreationTimestamp="2026-02-18 14:09:20 +0000 UTC" firstStartedPulling="2026-02-18 14:09:54.626910742 +0000 UTC m=+627.122631664" lastFinishedPulling="2026-02-18 14:09:59.274279778 +0000 UTC m=+631.770000710" observedRunningTime="2026-02-18 14:10:00.000118556 +0000 UTC m=+632.495839488" watchObservedRunningTime="2026-02-18 14:10:00.002796284 +0000 UTC m=+632.498517206" Feb 18 14:10:00 crc kubenswrapper[4739]: I0218 14:10:00.034618 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podStartSLOduration=33.455022331 podStartE2EDuration="40.03457471s" podCreationTimestamp="2026-02-18 14:09:20 +0000 UTC" firstStartedPulling="2026-02-18 14:09:52.698575286 +0000 UTC m=+625.194296218" lastFinishedPulling="2026-02-18 14:09:59.278127675 +0000 UTC m=+631.773848597" observedRunningTime="2026-02-18 14:10:00.024048916 +0000 UTC m=+632.519769828" watchObservedRunningTime="2026-02-18 14:10:00.03457471 +0000 UTC m=+632.530295642" Feb 18 14:10:00 crc kubenswrapper[4739]: I0218 14:10:00.094616 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-547f5ff-7mn2h" podStartSLOduration=39.329611275 podStartE2EDuration="40.094596435s" podCreationTimestamp="2026-02-18 14:09:20 +0000 UTC" firstStartedPulling="2026-02-18 14:09:59.046583039 +0000 UTC m=+631.542304001" lastFinishedPulling="2026-02-18 14:09:59.811568239 +0000 UTC m=+632.307289161" observedRunningTime="2026-02-18 14:10:00.084944433 +0000 UTC m=+632.580665375" watchObservedRunningTime="2026-02-18 14:10:00.094596435 +0000 UTC m=+632.590317367" Feb 18 14:10:03 crc kubenswrapper[4739]: I0218 14:10:03.028037 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" event={"ID":"ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc","Type":"ContainerStarted","Data":"ffbf001d0c53a44567dce50cda8fd6397bcd2dc12b09ba9b03b313a22e2ec453"} Feb 18 14:10:03 
crc kubenswrapper[4739]: I0218 14:10:03.031967 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" event={"ID":"2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe","Type":"ContainerStarted","Data":"5937df856d1d46847539665e65a3d6d8ab68c8c20f66dc465922025398c42662"} Feb 18 14:10:03 crc kubenswrapper[4739]: I0218 14:10:03.032158 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:10:03 crc kubenswrapper[4739]: I0218 14:10:03.061619 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-c9tcc" podStartSLOduration=40.640615494 podStartE2EDuration="43.061560571s" podCreationTimestamp="2026-02-18 14:09:20 +0000 UTC" firstStartedPulling="2026-02-18 14:09:59.774675664 +0000 UTC m=+632.270396586" lastFinishedPulling="2026-02-18 14:10:02.195620741 +0000 UTC m=+634.691341663" observedRunningTime="2026-02-18 14:10:03.053730905 +0000 UTC m=+635.549451847" watchObservedRunningTime="2026-02-18 14:10:03.061560571 +0000 UTC m=+635.557281493" Feb 18 14:10:03 crc kubenswrapper[4739]: I0218 14:10:03.091121 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" podStartSLOduration=39.437711008 podStartE2EDuration="42.091099642s" podCreationTimestamp="2026-02-18 14:09:21 +0000 UTC" firstStartedPulling="2026-02-18 14:09:59.540346859 +0000 UTC m=+632.036067781" lastFinishedPulling="2026-02-18 14:10:02.193735493 +0000 UTC m=+634.689456415" observedRunningTime="2026-02-18 14:10:03.087767028 +0000 UTC m=+635.583487960" watchObservedRunningTime="2026-02-18 14:10:03.091099642 +0000 UTC m=+635.586820584" Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.211426 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xl5rj"] Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.212523 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xl5rj" Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.221517 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.224436 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xl5rj"] Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.229115 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.229301 4739 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-4m87c" Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.256595 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-927qr"] Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.257633 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr" Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.259841 4739 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-jt56x" Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.264696 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-bfgbz"] Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.265696 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-bfgbz" Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.267219 4739 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-wsx9r" Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.270581 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-bfgbz"] Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.282537 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-927qr"] Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.317678 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm9tc\" (UniqueName: \"kubernetes.io/projected/09228bff-e02a-4a38-86ab-3d18492c3fa1-kube-api-access-sm9tc\") pod \"cert-manager-cainjector-cf98fcc89-xl5rj\" (UID: \"09228bff-e02a-4a38-86ab-3d18492c3fa1\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xl5rj" Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.317740 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rfrq\" (UniqueName: \"kubernetes.io/projected/c9731232-5945-414d-bf7c-cd9207130675-kube-api-access-8rfrq\") pod \"cert-manager-webhook-687f57d79b-927qr\" (UID: \"c9731232-5945-414d-bf7c-cd9207130675\") " pod="cert-manager/cert-manager-webhook-687f57d79b-927qr" Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.421156 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm9tc\" (UniqueName: \"kubernetes.io/projected/09228bff-e02a-4a38-86ab-3d18492c3fa1-kube-api-access-sm9tc\") pod \"cert-manager-cainjector-cf98fcc89-xl5rj\" (UID: \"09228bff-e02a-4a38-86ab-3d18492c3fa1\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xl5rj" Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.421216 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rfrq\" (UniqueName: \"kubernetes.io/projected/c9731232-5945-414d-bf7c-cd9207130675-kube-api-access-8rfrq\") pod \"cert-manager-webhook-687f57d79b-927qr\" (UID: \"c9731232-5945-414d-bf7c-cd9207130675\") " pod="cert-manager/cert-manager-webhook-687f57d79b-927qr" Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.421297 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mhf4\" (UniqueName: \"kubernetes.io/projected/4a1588a0-096b-4e77-b251-f034a57c7a04-kube-api-access-9mhf4\") pod \"cert-manager-858654f9db-bfgbz\" (UID: \"4a1588a0-096b-4e77-b251-f034a57c7a04\") " pod="cert-manager/cert-manager-858654f9db-bfgbz" Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.441859 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm9tc\" (UniqueName: 
\"kubernetes.io/projected/09228bff-e02a-4a38-86ab-3d18492c3fa1-kube-api-access-sm9tc\") pod \"cert-manager-cainjector-cf98fcc89-xl5rj\" (UID: \"09228bff-e02a-4a38-86ab-3d18492c3fa1\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xl5rj" Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.442799 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rfrq\" (UniqueName: \"kubernetes.io/projected/c9731232-5945-414d-bf7c-cd9207130675-kube-api-access-8rfrq\") pod \"cert-manager-webhook-687f57d79b-927qr\" (UID: \"c9731232-5945-414d-bf7c-cd9207130675\") " pod="cert-manager/cert-manager-webhook-687f57d79b-927qr" Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.522764 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mhf4\" (UniqueName: \"kubernetes.io/projected/4a1588a0-096b-4e77-b251-f034a57c7a04-kube-api-access-9mhf4\") pod \"cert-manager-858654f9db-bfgbz\" (UID: \"4a1588a0-096b-4e77-b251-f034a57c7a04\") " pod="cert-manager/cert-manager-858654f9db-bfgbz" Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.533383 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xl5rj" Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.540247 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mhf4\" (UniqueName: \"kubernetes.io/projected/4a1588a0-096b-4e77-b251-f034a57c7a04-kube-api-access-9mhf4\") pod \"cert-manager-858654f9db-bfgbz\" (UID: \"4a1588a0-096b-4e77-b251-f034a57c7a04\") " pod="cert-manager/cert-manager-858654f9db-bfgbz" Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.578618 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr" Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.585131 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-bfgbz" Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.854193 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-927qr"] Feb 18 14:10:06 crc kubenswrapper[4739]: I0218 14:10:06.966844 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-bfgbz"] Feb 18 14:10:07 crc kubenswrapper[4739]: I0218 14:10:07.012974 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xl5rj"] Feb 18 14:10:07 crc kubenswrapper[4739]: I0218 14:10:07.055326 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr" event={"ID":"c9731232-5945-414d-bf7c-cd9207130675","Type":"ContainerStarted","Data":"e8ec9501c5e7763f5c3f27ab80dac6d138f48b683f62abbca0c8100d78544cbd"} Feb 18 14:10:07 crc kubenswrapper[4739]: I0218 14:10:07.056524 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xl5rj" event={"ID":"09228bff-e02a-4a38-86ab-3d18492c3fa1","Type":"ContainerStarted","Data":"6fb5ad80aa567c9077b9b91bc5fe45863465870ed7866c56608c71c2238f40b3"} Feb 18 14:10:07 crc kubenswrapper[4739]: I0218 14:10:07.058303 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-bfgbz" event={"ID":"4a1588a0-096b-4e77-b251-f034a57c7a04","Type":"ContainerStarted","Data":"8a047c5d92b1fe401904b44746047144260d54c4c478b996c15d16a3109f6001"} Feb 18 14:10:11 crc kubenswrapper[4739]: I0218 14:10:11.485993 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" Feb 18 14:10:12 crc kubenswrapper[4739]: I0218 14:10:12.121314 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xl5rj" event={"ID":"09228bff-e02a-4a38-86ab-3d18492c3fa1","Type":"ContainerStarted","Data":"c2275996e9f713a6c4de0d6ebd364787512e90c3d51c849d0ba8ffc2f4983898"} Feb 18 14:10:12 crc kubenswrapper[4739]: I0218 14:10:12.125576 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-bfgbz" event={"ID":"4a1588a0-096b-4e77-b251-f034a57c7a04","Type":"ContainerStarted","Data":"3d3c9533ad06560c7aaea5d94681fc805ee8303be163b909088c5ebdafba4680"} Feb 18 14:10:12 crc kubenswrapper[4739]: I0218 14:10:12.134314 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr" event={"ID":"c9731232-5945-414d-bf7c-cd9207130675","Type":"ContainerStarted","Data":"16180aad5ff17f9442ca809b4bcdcc1d9cfba2a73e4951b86d5a99f948a79c0f"} Feb 18 14:10:12 crc kubenswrapper[4739]: I0218 14:10:12.134683 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr" Feb 18 14:10:12 crc kubenswrapper[4739]: I0218 14:10:12.138909 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xl5rj" podStartSLOduration=1.809481468 podStartE2EDuration="6.138891892s" podCreationTimestamp="2026-02-18 14:10:06 +0000 UTC" firstStartedPulling="2026-02-18 14:10:07.021623996 +0000 UTC m=+639.517344918" lastFinishedPulling="2026-02-18 14:10:11.35103438 +0000 UTC m=+643.846755342" observedRunningTime="2026-02-18 14:10:12.135929718 +0000 UTC m=+644.631650640" watchObservedRunningTime="2026-02-18 14:10:12.138891892 +0000 UTC 
m=+644.634612814" Feb 18 14:10:12 crc kubenswrapper[4739]: I0218 14:10:12.155350 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr" podStartSLOduration=1.632294804 podStartE2EDuration="6.155327654s" podCreationTimestamp="2026-02-18 14:10:06 +0000 UTC" firstStartedPulling="2026-02-18 14:10:06.851183222 +0000 UTC m=+639.346904144" lastFinishedPulling="2026-02-18 14:10:11.374216072 +0000 UTC m=+643.869936994" observedRunningTime="2026-02-18 14:10:12.15114036 +0000 UTC m=+644.646861282" watchObservedRunningTime="2026-02-18 14:10:12.155327654 +0000 UTC m=+644.651048576" Feb 18 14:10:12 crc kubenswrapper[4739]: I0218 14:10:12.177572 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-bfgbz" podStartSLOduration=1.791082047 podStartE2EDuration="6.177547912s" podCreationTimestamp="2026-02-18 14:10:06 +0000 UTC" firstStartedPulling="2026-02-18 14:10:06.973890829 +0000 UTC m=+639.469611751" lastFinishedPulling="2026-02-18 14:10:11.360356694 +0000 UTC m=+643.856077616" observedRunningTime="2026-02-18 14:10:12.172223168 +0000 UTC m=+644.667944110" watchObservedRunningTime="2026-02-18 14:10:12.177547912 +0000 UTC m=+644.673268844" Feb 18 14:10:16 crc kubenswrapper[4739]: I0218 14:10:16.582026 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.460359 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7"] Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.462344 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.466271 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.516008 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7"] Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.532402 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4fece5bf-a118-4158-9879-3b4ca9e751af-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7\" (UID: \"4fece5bf-a118-4158-9879-3b4ca9e751af\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.532631 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pldlz\" (UniqueName: \"kubernetes.io/projected/4fece5bf-a118-4158-9879-3b4ca9e751af-kube-api-access-pldlz\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7\" (UID: \"4fece5bf-a118-4158-9879-3b4ca9e751af\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.532852 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4fece5bf-a118-4158-9879-3b4ca9e751af-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7\" (UID: \"4fece5bf-a118-4158-9879-3b4ca9e751af\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.634572 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4fece5bf-a118-4158-9879-3b4ca9e751af-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7\" (UID: \"4fece5bf-a118-4158-9879-3b4ca9e751af\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.634632 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4fece5bf-a118-4158-9879-3b4ca9e751af-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7\" (UID: \"4fece5bf-a118-4158-9879-3b4ca9e751af\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.634715 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pldlz\" (UniqueName: \"kubernetes.io/projected/4fece5bf-a118-4158-9879-3b4ca9e751af-kube-api-access-pldlz\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7\" (UID: \"4fece5bf-a118-4158-9879-3b4ca9e751af\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.635580 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/4fece5bf-a118-4158-9879-3b4ca9e751af-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7\" (UID: \"4fece5bf-a118-4158-9879-3b4ca9e751af\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.635601 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4fece5bf-a118-4158-9879-3b4ca9e751af-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7\" (UID: \"4fece5bf-a118-4158-9879-3b4ca9e751af\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.659266 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d"] Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.660640 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.673658 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pldlz\" (UniqueName: \"kubernetes.io/projected/4fece5bf-a118-4158-9879-3b4ca9e751af-kube-api-access-pldlz\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7\" (UID: \"4fece5bf-a118-4158-9879-3b4ca9e751af\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.675913 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d"] Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.735353 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/517d6503-525a-420f-b4e7-1732df952bd4-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d\" (UID: \"517d6503-525a-420f-b4e7-1732df952bd4\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.735498 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/517d6503-525a-420f-b4e7-1732df952bd4-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d\" (UID: \"517d6503-525a-420f-b4e7-1732df952bd4\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.735549 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfcqs\" (UniqueName: \"kubernetes.io/projected/517d6503-525a-420f-b4e7-1732df952bd4-kube-api-access-xfcqs\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d\" (UID: \"517d6503-525a-420f-b4e7-1732df952bd4\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.779246 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.836217 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfcqs\" (UniqueName: \"kubernetes.io/projected/517d6503-525a-420f-b4e7-1732df952bd4-kube-api-access-xfcqs\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d\" (UID: \"517d6503-525a-420f-b4e7-1732df952bd4\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.836313 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/517d6503-525a-420f-b4e7-1732df952bd4-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d\" (UID: \"517d6503-525a-420f-b4e7-1732df952bd4\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.836407 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/517d6503-525a-420f-b4e7-1732df952bd4-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d\" (UID: \"517d6503-525a-420f-b4e7-1732df952bd4\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.836915 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/517d6503-525a-420f-b4e7-1732df952bd4-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d\" (UID: \"517d6503-525a-420f-b4e7-1732df952bd4\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.837043 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/517d6503-525a-420f-b4e7-1732df952bd4-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d\" (UID: \"517d6503-525a-420f-b4e7-1732df952bd4\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.854750 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfcqs\" (UniqueName: \"kubernetes.io/projected/517d6503-525a-420f-b4e7-1732df952bd4-kube-api-access-xfcqs\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d\" (UID: \"517d6503-525a-420f-b4e7-1732df952bd4\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" Feb 18 14:10:38 crc kubenswrapper[4739]: I0218 14:10:38.976387 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7"] Feb 18 14:10:39 crc kubenswrapper[4739]: I0218 14:10:39.021897 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" Feb 18 14:10:39 crc kubenswrapper[4739]: I0218 14:10:39.316054 4739 generic.go:334] "Generic (PLEG): container finished" podID="4fece5bf-a118-4158-9879-3b4ca9e751af" containerID="454dd61bdd62d47407b56f447566ea9f22fe341c25dc7ed14dcd3d120b9b8069" exitCode=0 Feb 18 14:10:39 crc kubenswrapper[4739]: I0218 14:10:39.316093 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" event={"ID":"4fece5bf-a118-4158-9879-3b4ca9e751af","Type":"ContainerDied","Data":"454dd61bdd62d47407b56f447566ea9f22fe341c25dc7ed14dcd3d120b9b8069"} Feb 18 14:10:39 crc kubenswrapper[4739]: I0218 14:10:39.316113 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" event={"ID":"4fece5bf-a118-4158-9879-3b4ca9e751af","Type":"ContainerStarted","Data":"7a9b013ed906a0613589ae3d66b4884c8d5e7a76e9c7fed840daceab37832d7b"} Feb 18 14:10:39 crc kubenswrapper[4739]: I0218 14:10:39.340812 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d"] Feb 18 14:10:39 crc kubenswrapper[4739]: W0218 14:10:39.350487 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod517d6503_525a_420f_b4e7_1732df952bd4.slice/crio-cccc6a4ec1fc1c1e1a2b52a670a9f2122e452ca04c1359babe64aad0548ae4f4 WatchSource:0}: Error finding container cccc6a4ec1fc1c1e1a2b52a670a9f2122e452ca04c1359babe64aad0548ae4f4: Status 404 returned error can't find the container with id cccc6a4ec1fc1c1e1a2b52a670a9f2122e452ca04c1359babe64aad0548ae4f4 Feb 18 14:10:40 crc kubenswrapper[4739]: I0218 14:10:40.327307 4739 generic.go:334] "Generic (PLEG): container finished" podID="517d6503-525a-420f-b4e7-1732df952bd4" containerID="92ccbd04e73a399a1b2acade5f0fc2fe3436deea52b89df59639b4cccf3974e0" exitCode=0 Feb 18 14:10:40 crc kubenswrapper[4739]: I0218 14:10:40.327473 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" event={"ID":"517d6503-525a-420f-b4e7-1732df952bd4","Type":"ContainerDied","Data":"92ccbd04e73a399a1b2acade5f0fc2fe3436deea52b89df59639b4cccf3974e0"} Feb 18 14:10:40 crc kubenswrapper[4739]: I0218 14:10:40.327849 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" event={"ID":"517d6503-525a-420f-b4e7-1732df952bd4","Type":"ContainerStarted","Data":"cccc6a4ec1fc1c1e1a2b52a670a9f2122e452ca04c1359babe64aad0548ae4f4"} Feb 18 14:10:41 crc kubenswrapper[4739]: I0218 14:10:41.335787 4739 generic.go:334] "Generic (PLEG): container finished" podID="4fece5bf-a118-4158-9879-3b4ca9e751af" containerID="f6ee98b5c21b1150f8da5d85e1aaf52dc0fcb1a34dee8bd3ae7600a84cb97958" exitCode=0 Feb 18 14:10:41 crc kubenswrapper[4739]: I0218 14:10:41.336119 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" event={"ID":"4fece5bf-a118-4158-9879-3b4ca9e751af","Type":"ContainerDied","Data":"f6ee98b5c21b1150f8da5d85e1aaf52dc0fcb1a34dee8bd3ae7600a84cb97958"} Feb 18 14:10:42 crc kubenswrapper[4739]: I0218 14:10:42.346863 4739 generic.go:334] "Generic (PLEG): container 
finished" podID="4fece5bf-a118-4158-9879-3b4ca9e751af" containerID="33f9eaf41663c6cdc1fa6c161746dc2c97457c8e8624b5d58df79594ba4e8321" exitCode=0 Feb 18 14:10:42 crc kubenswrapper[4739]: I0218 14:10:42.347013 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" event={"ID":"4fece5bf-a118-4158-9879-3b4ca9e751af","Type":"ContainerDied","Data":"33f9eaf41663c6cdc1fa6c161746dc2c97457c8e8624b5d58df79594ba4e8321"} Feb 18 14:10:42 crc kubenswrapper[4739]: I0218 14:10:42.349022 4739 generic.go:334] "Generic (PLEG): container finished" podID="517d6503-525a-420f-b4e7-1732df952bd4" containerID="35bd051d69fe2a278c91886fad39204d35c0233eac46781159cd57033adb0c4b" exitCode=0 Feb 18 14:10:42 crc kubenswrapper[4739]: I0218 14:10:42.349079 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" event={"ID":"517d6503-525a-420f-b4e7-1732df952bd4","Type":"ContainerDied","Data":"35bd051d69fe2a278c91886fad39204d35c0233eac46781159cd57033adb0c4b"} Feb 18 14:10:43 crc kubenswrapper[4739]: I0218 14:10:43.361093 4739 generic.go:334] "Generic (PLEG): container finished" podID="517d6503-525a-420f-b4e7-1732df952bd4" containerID="0c9676f5b9f1ebc76195364908897f7a73a2564143e65de0de125703c8cdc208" exitCode=0 Feb 18 14:10:43 crc kubenswrapper[4739]: I0218 14:10:43.361195 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" event={"ID":"517d6503-525a-420f-b4e7-1732df952bd4","Type":"ContainerDied","Data":"0c9676f5b9f1ebc76195364908897f7a73a2564143e65de0de125703c8cdc208"} Feb 18 14:10:43 crc kubenswrapper[4739]: I0218 14:10:43.584126 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" Feb 18 14:10:43 crc kubenswrapper[4739]: I0218 14:10:43.706932 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4fece5bf-a118-4158-9879-3b4ca9e751af-bundle\") pod \"4fece5bf-a118-4158-9879-3b4ca9e751af\" (UID: \"4fece5bf-a118-4158-9879-3b4ca9e751af\") " Feb 18 14:10:43 crc kubenswrapper[4739]: I0218 14:10:43.707010 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pldlz\" (UniqueName: \"kubernetes.io/projected/4fece5bf-a118-4158-9879-3b4ca9e751af-kube-api-access-pldlz\") pod \"4fece5bf-a118-4158-9879-3b4ca9e751af\" (UID: \"4fece5bf-a118-4158-9879-3b4ca9e751af\") " Feb 18 14:10:43 crc kubenswrapper[4739]: I0218 14:10:43.707055 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4fece5bf-a118-4158-9879-3b4ca9e751af-util\") pod \"4fece5bf-a118-4158-9879-3b4ca9e751af\" (UID: \"4fece5bf-a118-4158-9879-3b4ca9e751af\") " Feb 18 14:10:43 crc kubenswrapper[4739]: I0218 14:10:43.707981 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fece5bf-a118-4158-9879-3b4ca9e751af-bundle" (OuterVolumeSpecName: "bundle") pod "4fece5bf-a118-4158-9879-3b4ca9e751af" (UID: "4fece5bf-a118-4158-9879-3b4ca9e751af"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:10:43 crc kubenswrapper[4739]: I0218 14:10:43.716684 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fece5bf-a118-4158-9879-3b4ca9e751af-kube-api-access-pldlz" (OuterVolumeSpecName: "kube-api-access-pldlz") pod "4fece5bf-a118-4158-9879-3b4ca9e751af" (UID: "4fece5bf-a118-4158-9879-3b4ca9e751af"). InnerVolumeSpecName "kube-api-access-pldlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:10:43 crc kubenswrapper[4739]: I0218 14:10:43.721360 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fece5bf-a118-4158-9879-3b4ca9e751af-util" (OuterVolumeSpecName: "util") pod "4fece5bf-a118-4158-9879-3b4ca9e751af" (UID: "4fece5bf-a118-4158-9879-3b4ca9e751af"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:10:43 crc kubenswrapper[4739]: I0218 14:10:43.809214 4739 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4fece5bf-a118-4158-9879-3b4ca9e751af-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:10:43 crc kubenswrapper[4739]: I0218 14:10:43.809254 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pldlz\" (UniqueName: \"kubernetes.io/projected/4fece5bf-a118-4158-9879-3b4ca9e751af-kube-api-access-pldlz\") on node \"crc\" DevicePath \"\"" Feb 18 14:10:43 crc kubenswrapper[4739]: I0218 14:10:43.809272 4739 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4fece5bf-a118-4158-9879-3b4ca9e751af-util\") on node \"crc\" DevicePath \"\"" Feb 18 14:10:44 crc kubenswrapper[4739]: I0218 14:10:44.370516 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" Feb 18 14:10:44 crc kubenswrapper[4739]: I0218 14:10:44.370540 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7" event={"ID":"4fece5bf-a118-4158-9879-3b4ca9e751af","Type":"ContainerDied","Data":"7a9b013ed906a0613589ae3d66b4884c8d5e7a76e9c7fed840daceab37832d7b"} Feb 18 14:10:44 crc kubenswrapper[4739]: I0218 14:10:44.370595 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a9b013ed906a0613589ae3d66b4884c8d5e7a76e9c7fed840daceab37832d7b" Feb 18 14:10:44 crc kubenswrapper[4739]: I0218 14:10:44.624783 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" Feb 18 14:10:44 crc kubenswrapper[4739]: I0218 14:10:44.825511 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfcqs\" (UniqueName: \"kubernetes.io/projected/517d6503-525a-420f-b4e7-1732df952bd4-kube-api-access-xfcqs\") pod \"517d6503-525a-420f-b4e7-1732df952bd4\" (UID: \"517d6503-525a-420f-b4e7-1732df952bd4\") " Feb 18 14:10:44 crc kubenswrapper[4739]: I0218 14:10:44.825594 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/517d6503-525a-420f-b4e7-1732df952bd4-util\") pod \"517d6503-525a-420f-b4e7-1732df952bd4\" (UID: \"517d6503-525a-420f-b4e7-1732df952bd4\") " Feb 18 14:10:44 crc kubenswrapper[4739]: I0218 14:10:44.825653 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/517d6503-525a-420f-b4e7-1732df952bd4-bundle\") pod \"517d6503-525a-420f-b4e7-1732df952bd4\" (UID: \"517d6503-525a-420f-b4e7-1732df952bd4\") " Feb 18 14:10:44 crc kubenswrapper[4739]: I0218 14:10:44.826970 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/517d6503-525a-420f-b4e7-1732df952bd4-bundle" (OuterVolumeSpecName: "bundle") pod "517d6503-525a-420f-b4e7-1732df952bd4" (UID: "517d6503-525a-420f-b4e7-1732df952bd4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:10:44 crc kubenswrapper[4739]: I0218 14:10:44.834546 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/517d6503-525a-420f-b4e7-1732df952bd4-kube-api-access-xfcqs" (OuterVolumeSpecName: "kube-api-access-xfcqs") pod "517d6503-525a-420f-b4e7-1732df952bd4" (UID: "517d6503-525a-420f-b4e7-1732df952bd4"). InnerVolumeSpecName "kube-api-access-xfcqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:10:44 crc kubenswrapper[4739]: I0218 14:10:44.928038 4739 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/517d6503-525a-420f-b4e7-1732df952bd4-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:10:44 crc kubenswrapper[4739]: I0218 14:10:44.928118 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfcqs\" (UniqueName: \"kubernetes.io/projected/517d6503-525a-420f-b4e7-1732df952bd4-kube-api-access-xfcqs\") on node \"crc\" DevicePath \"\"" Feb 18 14:10:45 crc kubenswrapper[4739]: I0218 14:10:45.305294 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/517d6503-525a-420f-b4e7-1732df952bd4-util" (OuterVolumeSpecName: "util") pod "517d6503-525a-420f-b4e7-1732df952bd4" (UID: "517d6503-525a-420f-b4e7-1732df952bd4"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:10:45 crc kubenswrapper[4739]: I0218 14:10:45.332328 4739 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/517d6503-525a-420f-b4e7-1732df952bd4-util\") on node \"crc\" DevicePath \"\"" Feb 18 14:10:45 crc kubenswrapper[4739]: I0218 14:10:45.380283 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" event={"ID":"517d6503-525a-420f-b4e7-1732df952bd4","Type":"ContainerDied","Data":"cccc6a4ec1fc1c1e1a2b52a670a9f2122e452ca04c1359babe64aad0548ae4f4"} Feb 18 14:10:45 crc kubenswrapper[4739]: I0218 14:10:45.380961 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cccc6a4ec1fc1c1e1a2b52a670a9f2122e452ca04c1359babe64aad0548ae4f4" Feb 18 14:10:45 crc kubenswrapper[4739]: I0218 14:10:45.380373 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.302862 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-54nln"] Feb 18 14:10:48 crc kubenswrapper[4739]: E0218 14:10:48.303747 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517d6503-525a-420f-b4e7-1732df952bd4" containerName="extract" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.303764 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="517d6503-525a-420f-b4e7-1732df952bd4" containerName="extract" Feb 18 14:10:48 crc kubenswrapper[4739]: E0218 14:10:48.303785 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517d6503-525a-420f-b4e7-1732df952bd4" containerName="util" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.303794 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="517d6503-525a-420f-b4e7-1732df952bd4" containerName="util" Feb 18 14:10:48 crc kubenswrapper[4739]: E0218 14:10:48.303819 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fece5bf-a118-4158-9879-3b4ca9e751af" containerName="pull" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.303828 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fece5bf-a118-4158-9879-3b4ca9e751af" containerName="pull" Feb 18 14:10:48 crc kubenswrapper[4739]: E0218 14:10:48.303841 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fece5bf-a118-4158-9879-3b4ca9e751af" containerName="util" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.303848 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fece5bf-a118-4158-9879-3b4ca9e751af" containerName="util" Feb 18 14:10:48 crc kubenswrapper[4739]: E0218 14:10:48.303859 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517d6503-525a-420f-b4e7-1732df952bd4" containerName="pull" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.303866 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="517d6503-525a-420f-b4e7-1732df952bd4" containerName="pull" Feb 18 14:10:48 crc kubenswrapper[4739]: E0218 14:10:48.303880 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fece5bf-a118-4158-9879-3b4ca9e751af" containerName="extract" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.303888 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fece5bf-a118-4158-9879-3b4ca9e751af" 
containerName="extract" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.304026 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="517d6503-525a-420f-b4e7-1732df952bd4" containerName="extract" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.304051 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fece5bf-a118-4158-9879-3b4ca9e751af" containerName="extract" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.304644 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-54nln" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.308384 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-zst2v" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.309697 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.313353 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.323200 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-54nln"] Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.477436 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxwnb\" (UniqueName: \"kubernetes.io/projected/4b0da132-982d-47b8-ae8a-d0529fbfe6a4-kube-api-access-pxwnb\") pod \"cluster-logging-operator-c769fd969-54nln\" (UID: \"4b0da132-982d-47b8-ae8a-d0529fbfe6a4\") " pod="openshift-logging/cluster-logging-operator-c769fd969-54nln" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.579265 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxwnb\" (UniqueName: \"kubernetes.io/projected/4b0da132-982d-47b8-ae8a-d0529fbfe6a4-kube-api-access-pxwnb\") pod \"cluster-logging-operator-c769fd969-54nln\" (UID: \"4b0da132-982d-47b8-ae8a-d0529fbfe6a4\") " pod="openshift-logging/cluster-logging-operator-c769fd969-54nln" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.598738 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxwnb\" (UniqueName: \"kubernetes.io/projected/4b0da132-982d-47b8-ae8a-d0529fbfe6a4-kube-api-access-pxwnb\") pod \"cluster-logging-operator-c769fd969-54nln\" (UID: \"4b0da132-982d-47b8-ae8a-d0529fbfe6a4\") " pod="openshift-logging/cluster-logging-operator-c769fd969-54nln" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.630585 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-54nln" Feb 18 14:10:48 crc kubenswrapper[4739]: I0218 14:10:48.840308 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-54nln"] Feb 18 14:10:49 crc kubenswrapper[4739]: I0218 14:10:49.402608 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-54nln" event={"ID":"4b0da132-982d-47b8-ae8a-d0529fbfe6a4","Type":"ContainerStarted","Data":"42175b195f358ba914182304ce6e0ebffb25d3923adf31dee1bd3f7a30ecb776"} Feb 18 14:10:55 crc kubenswrapper[4739]: I0218 14:10:55.466310 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-54nln" event={"ID":"4b0da132-982d-47b8-ae8a-d0529fbfe6a4","Type":"ContainerStarted","Data":"bf4ea001ea1dd847baae03b4ae85e964ad985d7b1ab8c3f7b8c94526d33c5d60"} Feb 18 14:10:59 crc kubenswrapper[4739]: I0218 14:10:59.373335 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:10:59 crc kubenswrapper[4739]: I0218 14:10:59.373710 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.025331 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-54nln" podStartSLOduration=6.58989109 podStartE2EDuration="12.02530912s" podCreationTimestamp="2026-02-18 14:10:48 +0000 UTC" firstStartedPulling="2026-02-18 14:10:48.858362043 +0000 UTC m=+681.354082965" lastFinishedPulling="2026-02-18 14:10:54.293780073 +0000 UTC m=+686.789500995" observedRunningTime="2026-02-18 14:10:55.506698557 +0000 UTC m=+688.002419479" watchObservedRunningTime="2026-02-18 14:11:00.02530912 +0000 UTC m=+692.521030062" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.029377 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw"] Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.030720 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.033905 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.038954 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.039200 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.039212 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.039231 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.041782 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-7w974" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.048280 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw"] Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.155379 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4091e4df-be25-4e94-bf12-7079a8ce9b5f-apiservice-cert\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.155485 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87brw\" (UniqueName: \"kubernetes.io/projected/4091e4df-be25-4e94-bf12-7079a8ce9b5f-kube-api-access-87brw\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.155514 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4091e4df-be25-4e94-bf12-7079a8ce9b5f-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.155554 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/4091e4df-be25-4e94-bf12-7079a8ce9b5f-manager-config\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.155590 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/4091e4df-be25-4e94-bf12-7079a8ce9b5f-webhook-cert\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.256599 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4091e4df-be25-4e94-bf12-7079a8ce9b5f-apiservice-cert\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.256682 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87brw\" (UniqueName: \"kubernetes.io/projected/4091e4df-be25-4e94-bf12-7079a8ce9b5f-kube-api-access-87brw\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.256718 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4091e4df-be25-4e94-bf12-7079a8ce9b5f-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.256778 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/4091e4df-be25-4e94-bf12-7079a8ce9b5f-manager-config\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.256832 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4091e4df-be25-4e94-bf12-7079a8ce9b5f-webhook-cert\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.257913 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/4091e4df-be25-4e94-bf12-7079a8ce9b5f-manager-config\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.265456 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4091e4df-be25-4e94-bf12-7079a8ce9b5f-webhook-cert\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.268978 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4091e4df-be25-4e94-bf12-7079a8ce9b5f-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.273113 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4091e4df-be25-4e94-bf12-7079a8ce9b5f-apiservice-cert\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.291312 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87brw\" (UniqueName: \"kubernetes.io/projected/4091e4df-be25-4e94-bf12-7079a8ce9b5f-kube-api-access-87brw\") pod \"loki-operator-controller-manager-7c7d667b45-kx8bw\" (UID: \"4091e4df-be25-4e94-bf12-7079a8ce9b5f\") " pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.349358 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:00 crc kubenswrapper[4739]: I0218 14:11:00.616615 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw"] Feb 18 14:11:00 crc kubenswrapper[4739]: W0218 14:11:00.633565 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4091e4df_be25_4e94_bf12_7079a8ce9b5f.slice/crio-f537dad184c3d4800f0eabfb3f0317ab642b6b80f1ee57369f01784ece1f01e4 WatchSource:0}: Error finding container f537dad184c3d4800f0eabfb3f0317ab642b6b80f1ee57369f01784ece1f01e4: Status 404 returned error can't find the container with id f537dad184c3d4800f0eabfb3f0317ab642b6b80f1ee57369f01784ece1f01e4 Feb 18 14:11:01 crc kubenswrapper[4739]: I0218 14:11:01.511831 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" event={"ID":"4091e4df-be25-4e94-bf12-7079a8ce9b5f","Type":"ContainerStarted","Data":"f537dad184c3d4800f0eabfb3f0317ab642b6b80f1ee57369f01784ece1f01e4"} Feb 18 14:11:03 crc kubenswrapper[4739]: I0218 14:11:03.528438 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" event={"ID":"4091e4df-be25-4e94-bf12-7079a8ce9b5f","Type":"ContainerStarted","Data":"668e5cf344ed8d06e64315007bd574671cf8c8e1f1fd333153fe7325adbbecad"} Feb 18 14:11:08 crc kubenswrapper[4739]: I0218 14:11:08.561493 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" event={"ID":"4091e4df-be25-4e94-bf12-7079a8ce9b5f","Type":"ContainerStarted","Data":"0f5a58e0edf17e924bc5e9579db08cf06cfce905915b2baf102218a6b7254d1c"} Feb 18 14:11:08 crc kubenswrapper[4739]: I0218 14:11:08.562085 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:08 crc kubenswrapper[4739]: I0218 14:11:08.563642 4739 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 14:11:08 crc kubenswrapper[4739]: I0218 14:11:08.579979 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" podStartSLOduration=0.971276919 podStartE2EDuration="8.579959872s" podCreationTimestamp="2026-02-18 14:11:00 +0000 UTC" firstStartedPulling="2026-02-18 14:11:00.635878906 +0000 UTC m=+693.131599828" lastFinishedPulling="2026-02-18 14:11:08.244561859 +0000 UTC m=+700.740282781" observedRunningTime="2026-02-18 14:11:08.577951151 +0000 UTC m=+701.073672083" watchObservedRunningTime="2026-02-18 14:11:08.579959872 +0000 UTC m=+701.075680804" Feb 18 14:11:11 crc kubenswrapper[4739]: I0218 14:11:11.946303 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 18 14:11:11 crc kubenswrapper[4739]: I0218 14:11:11.947798 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 18 14:11:11 crc kubenswrapper[4739]: I0218 14:11:11.950270 4739 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-4jt7w" Feb 18 14:11:11 crc kubenswrapper[4739]: I0218 14:11:11.950303 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 18 14:11:11 crc kubenswrapper[4739]: I0218 14:11:11.951037 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 18 14:11:11 crc kubenswrapper[4739]: I0218 14:11:11.957131 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 18 14:11:12 crc kubenswrapper[4739]: I0218 14:11:12.048670 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cdf7f37f-0342-40ee-99a4-e09417d53512\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cdf7f37f-0342-40ee-99a4-e09417d53512\") pod \"minio\" (UID: \"8b37d199-1cb8-410c-af45-c6a181f5a5fa\") " pod="minio-dev/minio" Feb 18 14:11:12 crc kubenswrapper[4739]: I0218 14:11:12.048773 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrrwc\" (UniqueName: \"kubernetes.io/projected/8b37d199-1cb8-410c-af45-c6a181f5a5fa-kube-api-access-xrrwc\") pod \"minio\" (UID: \"8b37d199-1cb8-410c-af45-c6a181f5a5fa\") " pod="minio-dev/minio" Feb 18 14:11:12 crc kubenswrapper[4739]: I0218 14:11:12.149848 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cdf7f37f-0342-40ee-99a4-e09417d53512\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cdf7f37f-0342-40ee-99a4-e09417d53512\") pod \"minio\" (UID: \"8b37d199-1cb8-410c-af45-c6a181f5a5fa\") " pod="minio-dev/minio" Feb 18 14:11:12 crc kubenswrapper[4739]: I0218 14:11:12.149956 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrrwc\" (UniqueName: \"kubernetes.io/projected/8b37d199-1cb8-410c-af45-c6a181f5a5fa-kube-api-access-xrrwc\") pod \"minio\" (UID: \"8b37d199-1cb8-410c-af45-c6a181f5a5fa\") " pod="minio-dev/minio" Feb 18 14:11:12 crc kubenswrapper[4739]: I0218 14:11:12.153993 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:11:12 crc kubenswrapper[4739]: I0218 14:11:12.154039 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cdf7f37f-0342-40ee-99a4-e09417d53512\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cdf7f37f-0342-40ee-99a4-e09417d53512\") pod \"minio\" (UID: \"8b37d199-1cb8-410c-af45-c6a181f5a5fa\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c1f6744619e96528fe550f20b0a6efc84d44207a81495198471d6a685eafc85c/globalmount\"" pod="minio-dev/minio" Feb 18 14:11:12 crc kubenswrapper[4739]: I0218 14:11:12.169333 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrrwc\" (UniqueName: \"kubernetes.io/projected/8b37d199-1cb8-410c-af45-c6a181f5a5fa-kube-api-access-xrrwc\") pod \"minio\" (UID: \"8b37d199-1cb8-410c-af45-c6a181f5a5fa\") " pod="minio-dev/minio" Feb 18 14:11:12 crc kubenswrapper[4739]: I0218 14:11:12.185818 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cdf7f37f-0342-40ee-99a4-e09417d53512\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cdf7f37f-0342-40ee-99a4-e09417d53512\") pod \"minio\" (UID: \"8b37d199-1cb8-410c-af45-c6a181f5a5fa\") " pod="minio-dev/minio" Feb 18 14:11:12 crc kubenswrapper[4739]: I0218 14:11:12.263888 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 18 14:11:12 crc kubenswrapper[4739]: I0218 14:11:12.761587 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 18 14:11:12 crc kubenswrapper[4739]: W0218 14:11:12.762588 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b37d199_1cb8_410c_af45_c6a181f5a5fa.slice/crio-13ebb1ad7f1c943b119e48f059d5afe49b508e142ad00be5e2e272a8d9c512f4 WatchSource:0}: Error finding container 13ebb1ad7f1c943b119e48f059d5afe49b508e142ad00be5e2e272a8d9c512f4: Status 404 returned error can't find the container with id 13ebb1ad7f1c943b119e48f059d5afe49b508e142ad00be5e2e272a8d9c512f4 Feb 18 14:11:13 crc kubenswrapper[4739]: I0218 14:11:13.594327 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"8b37d199-1cb8-410c-af45-c6a181f5a5fa","Type":"ContainerStarted","Data":"13ebb1ad7f1c943b119e48f059d5afe49b508e142ad00be5e2e272a8d9c512f4"} Feb 18 14:11:16 crc kubenswrapper[4739]: I0218 14:11:16.618524 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"8b37d199-1cb8-410c-af45-c6a181f5a5fa","Type":"ContainerStarted","Data":"294dfc4ba6866c3948399e099df856aa7445e88fbe4a1b126bef321ebd56a7a7"} Feb 18 14:11:16 crc kubenswrapper[4739]: I0218 14:11:16.642104 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.888085358 podStartE2EDuration="7.642082118s" podCreationTimestamp="2026-02-18 14:11:09 +0000 UTC" firstStartedPulling="2026-02-18 14:11:12.764976197 +0000 UTC m=+705.260697119" lastFinishedPulling="2026-02-18 14:11:15.518972917 +0000 UTC m=+708.014693879" observedRunningTime="2026-02-18 14:11:16.636369655 +0000 UTC m=+709.132090587" watchObservedRunningTime="2026-02-18 14:11:16.642082118 +0000 UTC m=+709.137803050" Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.829880 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x"] Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.831100 
4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.833650 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-76dw2" Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.835462 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.835742 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.835789 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.838914 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.874622 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x"] Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.916878 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcgtv\" (UniqueName: \"kubernetes.io/projected/d2537052-1467-4892-afe4-cafbbdfbd645-kube-api-access-jcgtv\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.916950 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2537052-1467-4892-afe4-cafbbdfbd645-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.917115 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/d2537052-1467-4892-afe4-cafbbdfbd645-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.917183 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/d2537052-1467-4892-afe4-cafbbdfbd645-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:21 crc kubenswrapper[4739]: I0218 14:11:21.917258 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2537052-1467-4892-afe4-cafbbdfbd645-config\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 
14:11:22.019649 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/d2537052-1467-4892-afe4-cafbbdfbd645-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.019740 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/d2537052-1467-4892-afe4-cafbbdfbd645-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.019814 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2537052-1467-4892-afe4-cafbbdfbd645-config\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.019866 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcgtv\" (UniqueName: \"kubernetes.io/projected/d2537052-1467-4892-afe4-cafbbdfbd645-kube-api-access-jcgtv\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.020405 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2537052-1467-4892-afe4-cafbbdfbd645-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.021006 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2537052-1467-4892-afe4-cafbbdfbd645-config\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.021258 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2537052-1467-4892-afe4-cafbbdfbd645-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.025327 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/d2537052-1467-4892-afe4-cafbbdfbd645-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.029103 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/d2537052-1467-4892-afe4-cafbbdfbd645-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.058049 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcgtv\" (UniqueName: \"kubernetes.io/projected/d2537052-1467-4892-afe4-cafbbdfbd645-kube-api-access-jcgtv\") pod \"logging-loki-distributor-5d5548c9f5-68g9x\" (UID: \"d2537052-1467-4892-afe4-cafbbdfbd645\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.099983 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg"] Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.100831 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.106767 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.107073 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.107265 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.118190 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg"] Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.148733 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.224172 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhl7g\" (UniqueName: \"kubernetes.io/projected/3886312a-0449-43cc-b914-a4633b2c7e80-kube-api-access-jhl7g\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.224223 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3886312a-0449-43cc-b914-a4633b2c7e80-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.224252 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/3886312a-0449-43cc-b914-a4633b2c7e80-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.224300 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3886312a-0449-43cc-b914-a4633b2c7e80-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.224321 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/3886312a-0449-43cc-b914-a4633b2c7e80-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.224339 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3886312a-0449-43cc-b914-a4633b2c7e80-config\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.254211 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx"] Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.255027 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.264068 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.264660 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.269575 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx"] Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.326728 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.326777 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.326832 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3886312a-0449-43cc-b914-a4633b2c7e80-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.326863 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/3886312a-0449-43cc-b914-a4633b2c7e80-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.326892 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3886312a-0449-43cc-b914-a4633b2c7e80-config\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.326926 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9jf5\" (UniqueName: \"kubernetes.io/projected/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-kube-api-access-x9jf5\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.326954 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-config\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.326981 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.327016 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhl7g\" (UniqueName: \"kubernetes.io/projected/3886312a-0449-43cc-b914-a4633b2c7e80-kube-api-access-jhl7g\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.327055 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3886312a-0449-43cc-b914-a4633b2c7e80-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.327086 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/3886312a-0449-43cc-b914-a4633b2c7e80-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.328329 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3886312a-0449-43cc-b914-a4633b2c7e80-config\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.328341 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3886312a-0449-43cc-b914-a4633b2c7e80-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.332665 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/3886312a-0449-43cc-b914-a4633b2c7e80-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.333219 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3886312a-0449-43cc-b914-a4633b2c7e80-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: 
\"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.353043 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/3886312a-0449-43cc-b914-a4633b2c7e80-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.368925 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhl7g\" (UniqueName: \"kubernetes.io/projected/3886312a-0449-43cc-b914-a4633b2c7e80-kube-api-access-jhl7g\") pod \"logging-loki-querier-76bf7b6d45-ccsmg\" (UID: \"3886312a-0449-43cc-b914-a4633b2c7e80\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.404060 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd"] Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.405490 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.407467 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-vkjm2" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.407685 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.407982 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.408083 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.413087 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq"] Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.414013 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.414206 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.414393 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.429307 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.429344 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.429394 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9jf5\" (UniqueName: \"kubernetes.io/projected/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-kube-api-access-x9jf5\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.429415 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-config\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.429435 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.430149 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.431727 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-config\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.432287 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.435506 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.440595 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd"] Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.447616 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.459368 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9jf5\" (UniqueName: \"kubernetes.io/projected/f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b-kube-api-access-x9jf5\") pod \"logging-loki-query-frontend-6d6859c548-grbnx\" (UID: \"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.473509 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq"] Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.532276 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82d2d64c-4971-48ee-a75c-30adadf054de-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.532353 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/717b73b9-8190-41ce-8513-eb314a37cdfd-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.532383 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/717b73b9-8190-41ce-8513-eb314a37cdfd-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.532427 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/82d2d64c-4971-48ee-a75c-30adadf054de-tls-secret\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " 
pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.532537 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/717b73b9-8190-41ce-8513-eb314a37cdfd-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.532583 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkww7\" (UniqueName: \"kubernetes.io/projected/717b73b9-8190-41ce-8513-eb314a37cdfd-kube-api-access-tkww7\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.532614 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/82d2d64c-4971-48ee-a75c-30adadf054de-lokistack-gateway\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.532638 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/82d2d64c-4971-48ee-a75c-30adadf054de-tenants\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.532721 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/717b73b9-8190-41ce-8513-eb314a37cdfd-rbac\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.532810 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/717b73b9-8190-41ce-8513-eb314a37cdfd-tenants\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.532918 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82d2d64c-4971-48ee-a75c-30adadf054de-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.532962 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/82d2d64c-4971-48ee-a75c-30adadf054de-rbac\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc 
kubenswrapper[4739]: I0218 14:11:22.533000 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r57sw\" (UniqueName: \"kubernetes.io/projected/82d2d64c-4971-48ee-a75c-30adadf054de-kube-api-access-r57sw\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.533038 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/717b73b9-8190-41ce-8513-eb314a37cdfd-tls-secret\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.533065 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/82d2d64c-4971-48ee-a75c-30adadf054de-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.533155 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/717b73b9-8190-41ce-8513-eb314a37cdfd-lokistack-gateway\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.595889 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.635026 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82d2d64c-4971-48ee-a75c-30adadf054de-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.635164 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/82d2d64c-4971-48ee-a75c-30adadf054de-rbac\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.635874 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r57sw\" (UniqueName: \"kubernetes.io/projected/82d2d64c-4971-48ee-a75c-30adadf054de-kube-api-access-r57sw\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.636275 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82d2d64c-4971-48ee-a75c-30adadf054de-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.636871 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/82d2d64c-4971-48ee-a75c-30adadf054de-rbac\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.637074 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/717b73b9-8190-41ce-8513-eb314a37cdfd-tls-secret\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.637166 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/82d2d64c-4971-48ee-a75c-30adadf054de-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.637984 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/717b73b9-8190-41ce-8513-eb314a37cdfd-lokistack-gateway\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.638054 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82d2d64c-4971-48ee-a75c-30adadf054de-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.638115 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/717b73b9-8190-41ce-8513-eb314a37cdfd-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.638153 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/717b73b9-8190-41ce-8513-eb314a37cdfd-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.638213 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/82d2d64c-4971-48ee-a75c-30adadf054de-tls-secret\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.638281 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/717b73b9-8190-41ce-8513-eb314a37cdfd-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.638342 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkww7\" (UniqueName: \"kubernetes.io/projected/717b73b9-8190-41ce-8513-eb314a37cdfd-kube-api-access-tkww7\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.638380 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/82d2d64c-4971-48ee-a75c-30adadf054de-lokistack-gateway\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.638415 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/82d2d64c-4971-48ee-a75c-30adadf054de-tenants\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.638488 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: 
\"kubernetes.io/configmap/717b73b9-8190-41ce-8513-eb314a37cdfd-rbac\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.638540 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/717b73b9-8190-41ce-8513-eb314a37cdfd-tenants\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.639435 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/717b73b9-8190-41ce-8513-eb314a37cdfd-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.639717 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/717b73b9-8190-41ce-8513-eb314a37cdfd-lokistack-gateway\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.640195 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82d2d64c-4971-48ee-a75c-30adadf054de-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.640532 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/82d2d64c-4971-48ee-a75c-30adadf054de-lokistack-gateway\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.640745 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/717b73b9-8190-41ce-8513-eb314a37cdfd-rbac\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.641046 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/717b73b9-8190-41ce-8513-eb314a37cdfd-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.651204 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/82d2d64c-4971-48ee-a75c-30adadf054de-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " 
pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.651404 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/82d2d64c-4971-48ee-a75c-30adadf054de-tls-secret\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.657243 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/717b73b9-8190-41ce-8513-eb314a37cdfd-tls-secret\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.659956 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r57sw\" (UniqueName: \"kubernetes.io/projected/82d2d64c-4971-48ee-a75c-30adadf054de-kube-api-access-r57sw\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.659977 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/82d2d64c-4971-48ee-a75c-30adadf054de-tenants\") pod \"logging-loki-gateway-5f9bf547f9-whgjq\" (UID: \"82d2d64c-4971-48ee-a75c-30adadf054de\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.660077 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkww7\" (UniqueName: \"kubernetes.io/projected/717b73b9-8190-41ce-8513-eb314a37cdfd-kube-api-access-tkww7\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.660691 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/717b73b9-8190-41ce-8513-eb314a37cdfd-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.660994 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/717b73b9-8190-41ce-8513-eb314a37cdfd-tenants\") pod \"logging-loki-gateway-5f9bf547f9-nd7jd\" (UID: \"717b73b9-8190-41ce-8513-eb314a37cdfd\") " pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.731379 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x"] Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.739298 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.750920 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:22 crc kubenswrapper[4739]: I0218 14:11:22.849397 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg"] Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.045126 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.047376 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.049983 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.050050 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.065471 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.092334 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx"] Feb 18 14:11:23 crc kubenswrapper[4739]: W0218 14:11:23.095647 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6ad99a5_d1e9_44a4_bf58_b2085ac14b4b.slice/crio-4726516971c8db75ca1326737a6cb1e5f9f0dd76e834195cd1f87ed4cc4c206a WatchSource:0}: Error finding container 4726516971c8db75ca1326737a6cb1e5f9f0dd76e834195cd1f87ed4cc4c206a: Status 404 returned error can't find the container with id 4726516971c8db75ca1326737a6cb1e5f9f0dd76e834195cd1f87ed4cc4c206a Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.146840 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-793619b8-d623-45aa-8547-e98e12f38d21\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-793619b8-d623-45aa-8547-e98e12f38d21\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.147201 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/bfabc0be-78aa-4cf2-ae16-6d226b95be03-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.147247 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfabc0be-78aa-4cf2-ae16-6d226b95be03-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.147330 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfabc0be-78aa-4cf2-ae16-6d226b95be03-config\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " 
pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.147363 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ffed82ef-7033-4c50-804d-4a14f53884a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ffed82ef-7033-4c50-804d-4a14f53884a8\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.147406 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/bfabc0be-78aa-4cf2-ae16-6d226b95be03-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.147432 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvdjj\" (UniqueName: \"kubernetes.io/projected/bfabc0be-78aa-4cf2-ae16-6d226b95be03-kube-api-access-tvdjj\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.147494 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/bfabc0be-78aa-4cf2-ae16-6d226b95be03-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.148432 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq"] Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.244879 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.245976 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.252597 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/bfabc0be-78aa-4cf2-ae16-6d226b95be03-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.252667 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfabc0be-78aa-4cf2-ae16-6d226b95be03-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.252747 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfabc0be-78aa-4cf2-ae16-6d226b95be03-config\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.252781 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ffed82ef-7033-4c50-804d-4a14f53884a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ffed82ef-7033-4c50-804d-4a14f53884a8\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.252834 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/bfabc0be-78aa-4cf2-ae16-6d226b95be03-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.252872 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvdjj\" (UniqueName: \"kubernetes.io/projected/bfabc0be-78aa-4cf2-ae16-6d226b95be03-kube-api-access-tvdjj\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.252913 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/bfabc0be-78aa-4cf2-ae16-6d226b95be03-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.252944 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-793619b8-d623-45aa-8547-e98e12f38d21\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-793619b8-d623-45aa-8547-e98e12f38d21\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.253934 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/bfabc0be-78aa-4cf2-ae16-6d226b95be03-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.253985 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfabc0be-78aa-4cf2-ae16-6d226b95be03-config\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.255970 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.256007 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ffed82ef-7033-4c50-804d-4a14f53884a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ffed82ef-7033-4c50-804d-4a14f53884a8\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0d7302d0c57022864d95ac85d3cb8f35f2dea7518adab428ee5cc729a54e0531/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.256154 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.256199 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-793619b8-d623-45aa-8547-e98e12f38d21\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-793619b8-d623-45aa-8547-e98e12f38d21\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/feaa171f0b6cd5799412ead8e4699e27ef427e9053f38aab2316903ddf25c100/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.256474 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.257098 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.257708 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.264343 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/bfabc0be-78aa-4cf2-ae16-6d226b95be03-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.264996 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/bfabc0be-78aa-4cf2-ae16-6d226b95be03-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.269740 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/bfabc0be-78aa-4cf2-ae16-6d226b95be03-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.309311 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ffed82ef-7033-4c50-804d-4a14f53884a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ffed82ef-7033-4c50-804d-4a14f53884a8\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.310384 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvdjj\" (UniqueName: \"kubernetes.io/projected/bfabc0be-78aa-4cf2-ae16-6d226b95be03-kube-api-access-tvdjj\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.334710 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd"] Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.336185 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-793619b8-d623-45aa-8547-e98e12f38d21\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-793619b8-d623-45aa-8547-e98e12f38d21\") pod \"logging-loki-ingester-0\" (UID: \"bfabc0be-78aa-4cf2-ae16-6d226b95be03\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.340489 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.341369 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.343344 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.343548 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.346427 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.354310 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/8cadd086-3e21-4dfc-9577-356fdcfe83c1-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.354360 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7688f8c1-6203-4159-b750-ced415be7cb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7688f8c1-6203-4159-b750-ced415be7cb7\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.354394 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cadd086-3e21-4dfc-9577-356fdcfe83c1-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.354413 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8cadd086-3e21-4dfc-9577-356fdcfe83c1-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.354462 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/8cadd086-3e21-4dfc-9577-356fdcfe83c1-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.354656 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9vfv\" (UniqueName: \"kubernetes.io/projected/8cadd086-3e21-4dfc-9577-356fdcfe83c1-kube-api-access-g9vfv\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.354710 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cadd086-3e21-4dfc-9577-356fdcfe83c1-config\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " 
pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.434390 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.456486 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d13e1961-45de-4db2-a4cb-04c91c7b18ad-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.456575 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/8cadd086-3e21-4dfc-9577-356fdcfe83c1-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.456670 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bngmb\" (UniqueName: \"kubernetes.io/projected/d13e1961-45de-4db2-a4cb-04c91c7b18ad-kube-api-access-bngmb\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.457542 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7dec5aa2-4fae-4a33-bb9d-c7430b1044f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7dec5aa2-4fae-4a33-bb9d-c7430b1044f5\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.457859 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7688f8c1-6203-4159-b750-ced415be7cb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7688f8c1-6203-4159-b750-ced415be7cb7\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.458364 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/d13e1961-45de-4db2-a4cb-04c91c7b18ad-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.458401 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8cadd086-3e21-4dfc-9577-356fdcfe83c1-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.458507 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9vfv\" (UniqueName: \"kubernetes.io/projected/8cadd086-3e21-4dfc-9577-356fdcfe83c1-kube-api-access-g9vfv\") pod \"logging-loki-compactor-0\" (UID: 
\"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.458865 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cadd086-3e21-4dfc-9577-356fdcfe83c1-config\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.458970 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/d13e1961-45de-4db2-a4cb-04c91c7b18ad-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.459010 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/8cadd086-3e21-4dfc-9577-356fdcfe83c1-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.459037 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/d13e1961-45de-4db2-a4cb-04c91c7b18ad-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.459145 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d13e1961-45de-4db2-a4cb-04c91c7b18ad-config\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.459172 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cadd086-3e21-4dfc-9577-356fdcfe83c1-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.459839 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cadd086-3e21-4dfc-9577-356fdcfe83c1-config\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.459976 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cadd086-3e21-4dfc-9577-356fdcfe83c1-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.460248 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: 
\"kubernetes.io/secret/8cadd086-3e21-4dfc-9577-356fdcfe83c1-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.462645 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8cadd086-3e21-4dfc-9577-356fdcfe83c1-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.463273 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/8cadd086-3e21-4dfc-9577-356fdcfe83c1-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.466527 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.466562 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7688f8c1-6203-4159-b750-ced415be7cb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7688f8c1-6203-4159-b750-ced415be7cb7\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4f570ac13e6df09a348188bf3c99db79ba6c613f72b2b42e103e60173cad3d99/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.476978 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9vfv\" (UniqueName: \"kubernetes.io/projected/8cadd086-3e21-4dfc-9577-356fdcfe83c1-kube-api-access-g9vfv\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.494708 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7688f8c1-6203-4159-b750-ced415be7cb7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7688f8c1-6203-4159-b750-ced415be7cb7\") pod \"logging-loki-compactor-0\" (UID: \"8cadd086-3e21-4dfc-9577-356fdcfe83c1\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.561421 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/d13e1961-45de-4db2-a4cb-04c91c7b18ad-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.561488 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/d13e1961-45de-4db2-a4cb-04c91c7b18ad-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.561515 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d13e1961-45de-4db2-a4cb-04c91c7b18ad-config\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.561545 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d13e1961-45de-4db2-a4cb-04c91c7b18ad-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.563689 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bngmb\" (UniqueName: \"kubernetes.io/projected/d13e1961-45de-4db2-a4cb-04c91c7b18ad-kube-api-access-bngmb\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.563776 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7dec5aa2-4fae-4a33-bb9d-c7430b1044f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7dec5aa2-4fae-4a33-bb9d-c7430b1044f5\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.563897 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/d13e1961-45de-4db2-a4cb-04c91c7b18ad-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.565954 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d13e1961-45de-4db2-a4cb-04c91c7b18ad-config\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.569090 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/d13e1961-45de-4db2-a4cb-04c91c7b18ad-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.569724 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d13e1961-45de-4db2-a4cb-04c91c7b18ad-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.571041 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/d13e1961-45de-4db2-a4cb-04c91c7b18ad-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " 
pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.576517 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/d13e1961-45de-4db2-a4cb-04c91c7b18ad-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.577107 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.577145 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7dec5aa2-4fae-4a33-bb9d-c7430b1044f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7dec5aa2-4fae-4a33-bb9d-c7430b1044f5\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/428cbf6e2a5f0931e5d37164ac9f8d8b697e2569180b7a3024f745d84b571d37/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.589802 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bngmb\" (UniqueName: \"kubernetes.io/projected/d13e1961-45de-4db2-a4cb-04c91c7b18ad-kube-api-access-bngmb\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.610605 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7dec5aa2-4fae-4a33-bb9d-c7430b1044f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7dec5aa2-4fae-4a33-bb9d-c7430b1044f5\") pod \"logging-loki-index-gateway-0\" (UID: \"d13e1961-45de-4db2-a4cb-04c91c7b18ad\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.634858 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.655803 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.669436 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" event={"ID":"717b73b9-8190-41ce-8513-eb314a37cdfd","Type":"ContainerStarted","Data":"15a21824e8b86ea716e5809907ecdbbaf9bdcc39c94a5ebb2a9ed68ceaa32dce"} Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.671159 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" event={"ID":"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b","Type":"ContainerStarted","Data":"4726516971c8db75ca1326737a6cb1e5f9f0dd76e834195cd1f87ed4cc4c206a"} Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.672804 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" event={"ID":"3886312a-0449-43cc-b914-a4633b2c7e80","Type":"ContainerStarted","Data":"3f8b07b6c419042850f5d2c44ac297c9341d063362c7ebec6608864417e05afe"} Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.674073 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" event={"ID":"82d2d64c-4971-48ee-a75c-30adadf054de","Type":"ContainerStarted","Data":"ab421dbd157c4995ee2ace6842f59330eb01ddd1add15ca5ab520079d40c2d32"} Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.676165 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" event={"ID":"d2537052-1467-4892-afe4-cafbbdfbd645","Type":"ContainerStarted","Data":"3d57e17da6f6b63f4935ba3674cb6af753fcbd06b47234c7cf156e2844c22d0d"} Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.841475 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 18 14:11:23 crc kubenswrapper[4739]: W0218 14:11:23.855590 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfabc0be_78aa_4cf2_ae16_6d226b95be03.slice/crio-04744e574c40db5196b3499df64c518a9e54a0329e3ebd298a5afdee099222fa WatchSource:0}: Error finding container 04744e574c40db5196b3499df64c518a9e54a0329e3ebd298a5afdee099222fa: Status 404 returned error can't find the container with id 04744e574c40db5196b3499df64c518a9e54a0329e3ebd298a5afdee099222fa Feb 18 14:11:23 crc kubenswrapper[4739]: I0218 14:11:23.876238 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 18 14:11:23 crc kubenswrapper[4739]: W0218 14:11:23.883400 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cadd086_3e21_4dfc_9577_356fdcfe83c1.slice/crio-ec9c53529d08c879b5bc8ff8111bd83f9f58d2b7634fa0d6318e65efacf17d02 WatchSource:0}: Error finding container ec9c53529d08c879b5bc8ff8111bd83f9f58d2b7634fa0d6318e65efacf17d02: Status 404 returned error can't find the container with id ec9c53529d08c879b5bc8ff8111bd83f9f58d2b7634fa0d6318e65efacf17d02 Feb 18 14:11:24 crc kubenswrapper[4739]: I0218 14:11:24.144055 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 18 14:11:24 crc kubenswrapper[4739]: W0218 14:11:24.147857 4739 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd13e1961_45de_4db2_a4cb_04c91c7b18ad.slice/crio-3d17c81b23ff01dee538d0fc06e19283ec9348614258e80cd5462e0fbdb7947c WatchSource:0}: Error finding container 3d17c81b23ff01dee538d0fc06e19283ec9348614258e80cd5462e0fbdb7947c: Status 404 returned error can't find the container with id 3d17c81b23ff01dee538d0fc06e19283ec9348614258e80cd5462e0fbdb7947c Feb 18 14:11:24 crc kubenswrapper[4739]: I0218 14:11:24.688074 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"8cadd086-3e21-4dfc-9577-356fdcfe83c1","Type":"ContainerStarted","Data":"ec9c53529d08c879b5bc8ff8111bd83f9f58d2b7634fa0d6318e65efacf17d02"} Feb 18 14:11:24 crc kubenswrapper[4739]: I0218 14:11:24.689842 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"d13e1961-45de-4db2-a4cb-04c91c7b18ad","Type":"ContainerStarted","Data":"3d17c81b23ff01dee538d0fc06e19283ec9348614258e80cd5462e0fbdb7947c"} Feb 18 14:11:24 crc kubenswrapper[4739]: I0218 14:11:24.690901 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"bfabc0be-78aa-4cf2-ae16-6d226b95be03","Type":"ContainerStarted","Data":"04744e574c40db5196b3499df64c518a9e54a0329e3ebd298a5afdee099222fa"} Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.718405 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"bfabc0be-78aa-4cf2-ae16-6d226b95be03","Type":"ContainerStarted","Data":"00f15253fceac7920379827392ab362285e548cac1b9d9ea99fd11eb8a1cd32e"} Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.719504 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.721524 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" event={"ID":"3886312a-0449-43cc-b914-a4633b2c7e80","Type":"ContainerStarted","Data":"7381d1d23b8d64918b8e9f22e68927268dab429d1c85352847348957fce0e46a"} Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.721641 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.723499 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"8cadd086-3e21-4dfc-9577-356fdcfe83c1","Type":"ContainerStarted","Data":"fd6665c203067679e160dad8384fb0adc38b55320b64f990ae2b1fe6368bb00a"} Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.723994 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.724914 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"d13e1961-45de-4db2-a4cb-04c91c7b18ad","Type":"ContainerStarted","Data":"c8016b50df8e7d5202238a1b97a3c4a719a6605afe6a0cfb8d168a1e6ddeb215"} Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.725602 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.727012 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" event={"ID":"82d2d64c-4971-48ee-a75c-30adadf054de","Type":"ContainerStarted","Data":"f00db3955efbc3e250bd1c83a5d608b42978648715859357e9255dc3ec695a6f"} Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.728839 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" event={"ID":"d2537052-1467-4892-afe4-cafbbdfbd645","Type":"ContainerStarted","Data":"edadc01b8674abed17f814e13f5f06aa4b70cbd3b8b2ecdc0f076d0b2f9144cf"} Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.729007 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.730584 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" event={"ID":"717b73b9-8190-41ce-8513-eb314a37cdfd","Type":"ContainerStarted","Data":"b02d91b1e269c801a1c546132c606efaf1c5c70268928a72a09b5b15ae12b22d"} Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.732186 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" event={"ID":"f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b","Type":"ContainerStarted","Data":"1ddc7dc066063733f48c303a27576fdabb8b5830d73bf262400021ced70c8369"} Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.732358 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.742500 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.389066495 podStartE2EDuration="7.7424849s" podCreationTimestamp="2026-02-18 14:11:21 +0000 UTC" firstStartedPulling="2026-02-18 14:11:23.870009096 +0000 UTC m=+716.365730008" lastFinishedPulling="2026-02-18 14:11:28.223427491 +0000 UTC m=+720.719148413" observedRunningTime="2026-02-18 14:11:28.736806188 +0000 UTC m=+721.232527130" watchObservedRunningTime="2026-02-18 14:11:28.7424849 +0000 UTC m=+721.238205822" Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.760521 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=2.6833621020000002 podStartE2EDuration="6.760506297s" podCreationTimestamp="2026-02-18 14:11:22 +0000 UTC" firstStartedPulling="2026-02-18 14:11:24.151134567 +0000 UTC m=+716.646855479" lastFinishedPulling="2026-02-18 14:11:28.228278752 +0000 UTC m=+720.723999674" observedRunningTime="2026-02-18 14:11:28.758761004 +0000 UTC m=+721.254481926" watchObservedRunningTime="2026-02-18 14:11:28.760506297 +0000 UTC m=+721.256227219" Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.791582 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" podStartSLOduration=2.306986597 podStartE2EDuration="7.791567068s" podCreationTimestamp="2026-02-18 14:11:21 +0000 UTC" firstStartedPulling="2026-02-18 14:11:22.739804744 +0000 UTC m=+715.235525666" lastFinishedPulling="2026-02-18 14:11:28.224385215 +0000 UTC m=+720.720106137" observedRunningTime="2026-02-18 14:11:28.787289872 +0000 UTC m=+721.283010804" watchObservedRunningTime="2026-02-18 14:11:28.791567068 +0000 UTC m=+721.287287990" Feb 18 14:11:28 crc 
kubenswrapper[4739]: I0218 14:11:28.809846 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" podStartSLOduration=1.7629017679999999 podStartE2EDuration="6.809829732s" podCreationTimestamp="2026-02-18 14:11:22 +0000 UTC" firstStartedPulling="2026-02-18 14:11:23.098705776 +0000 UTC m=+715.594426698" lastFinishedPulling="2026-02-18 14:11:28.14563374 +0000 UTC m=+720.641354662" observedRunningTime="2026-02-18 14:11:28.80574095 +0000 UTC m=+721.301461872" watchObservedRunningTime="2026-02-18 14:11:28.809829732 +0000 UTC m=+721.305550654" Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.827128 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" podStartSLOduration=1.473390399 podStartE2EDuration="6.827109171s" podCreationTimestamp="2026-02-18 14:11:22 +0000 UTC" firstStartedPulling="2026-02-18 14:11:22.869386811 +0000 UTC m=+715.365107733" lastFinishedPulling="2026-02-18 14:11:28.223105583 +0000 UTC m=+720.718826505" observedRunningTime="2026-02-18 14:11:28.821350248 +0000 UTC m=+721.317071160" watchObservedRunningTime="2026-02-18 14:11:28.827109171 +0000 UTC m=+721.322830093" Feb 18 14:11:28 crc kubenswrapper[4739]: I0218 14:11:28.840468 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=2.499322572 podStartE2EDuration="6.840427911s" podCreationTimestamp="2026-02-18 14:11:22 +0000 UTC" firstStartedPulling="2026-02-18 14:11:23.885784818 +0000 UTC m=+716.381505740" lastFinishedPulling="2026-02-18 14:11:28.226890157 +0000 UTC m=+720.722611079" observedRunningTime="2026-02-18 14:11:28.838632157 +0000 UTC m=+721.334353069" watchObservedRunningTime="2026-02-18 14:11:28.840427911 +0000 UTC m=+721.336148833" Feb 18 14:11:29 crc kubenswrapper[4739]: I0218 14:11:29.372603 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:11:29 crc kubenswrapper[4739]: I0218 14:11:29.372686 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:11:30 crc kubenswrapper[4739]: I0218 14:11:30.751374 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" event={"ID":"717b73b9-8190-41ce-8513-eb314a37cdfd","Type":"ContainerStarted","Data":"3742d5b78014809eaa56cf845ee9ae4816d365c82219869773ab5acbcde93dfc"} Feb 18 14:11:30 crc kubenswrapper[4739]: I0218 14:11:30.751834 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:30 crc kubenswrapper[4739]: I0218 14:11:30.751845 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:30 crc kubenswrapper[4739]: I0218 14:11:30.757244 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" 
event={"ID":"82d2d64c-4971-48ee-a75c-30adadf054de","Type":"ContainerStarted","Data":"ff8a9dfc4df6268def608077d571f8fb0f116e21ba2fd64008e4d7e87caa8782"} Feb 18 14:11:30 crc kubenswrapper[4739]: I0218 14:11:30.766124 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:30 crc kubenswrapper[4739]: I0218 14:11:30.769272 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" Feb 18 14:11:30 crc kubenswrapper[4739]: I0218 14:11:30.789137 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" podStartSLOduration=1.695161985 podStartE2EDuration="8.789101156s" podCreationTimestamp="2026-02-18 14:11:22 +0000 UTC" firstStartedPulling="2026-02-18 14:11:23.316312709 +0000 UTC m=+715.812033631" lastFinishedPulling="2026-02-18 14:11:30.41025188 +0000 UTC m=+722.905972802" observedRunningTime="2026-02-18 14:11:30.772063973 +0000 UTC m=+723.267784905" watchObservedRunningTime="2026-02-18 14:11:30.789101156 +0000 UTC m=+723.284822118" Feb 18 14:11:30 crc kubenswrapper[4739]: I0218 14:11:30.854873 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" podStartSLOduration=1.603985851 podStartE2EDuration="8.854854869s" podCreationTimestamp="2026-02-18 14:11:22 +0000 UTC" firstStartedPulling="2026-02-18 14:11:23.165607547 +0000 UTC m=+715.661328479" lastFinishedPulling="2026-02-18 14:11:30.416476575 +0000 UTC m=+722.912197497" observedRunningTime="2026-02-18 14:11:30.843258321 +0000 UTC m=+723.338979263" watchObservedRunningTime="2026-02-18 14:11:30.854854869 +0000 UTC m=+723.350575791" Feb 18 14:11:31 crc kubenswrapper[4739]: I0218 14:11:31.766345 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:31 crc kubenswrapper[4739]: I0218 14:11:31.766426 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:31 crc kubenswrapper[4739]: I0218 14:11:31.779311 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:31 crc kubenswrapper[4739]: I0218 14:11:31.783860 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" Feb 18 14:11:43 crc kubenswrapper[4739]: I0218 14:11:43.442930 4739 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 18 14:11:43 crc kubenswrapper[4739]: I0218 14:11:43.443337 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="bfabc0be-78aa-4cf2-ae16-6d226b95be03" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 18 14:11:43 crc kubenswrapper[4739]: I0218 14:11:43.679837 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Feb 18 14:11:43 crc kubenswrapper[4739]: I0218 14:11:43.683772 4739 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 14:11:52 crc kubenswrapper[4739]: I0218 14:11:52.158807 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 14:11:52 crc kubenswrapper[4739]: I0218 14:11:52.440926 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 14:11:52 crc kubenswrapper[4739]: I0218 14:11:52.602607 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 14:11:53 crc kubenswrapper[4739]: I0218 14:11:53.440669 4739 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 18 14:11:53 crc kubenswrapper[4739]: I0218 14:11:53.440734 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="bfabc0be-78aa-4cf2-ae16-6d226b95be03" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 18 14:11:59 crc kubenswrapper[4739]: I0218 14:11:59.372514 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:11:59 crc kubenswrapper[4739]: I0218 14:11:59.373084 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:11:59 crc kubenswrapper[4739]: I0218 14:11:59.373170 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 14:11:59 crc kubenswrapper[4739]: I0218 14:11:59.373866 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7bcd6eb763d9647cbf8a9e5cc6f00d646bc23617c6a59561a2e57ce5ab39d939"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:11:59 crc kubenswrapper[4739]: I0218 14:11:59.373924 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://7bcd6eb763d9647cbf8a9e5cc6f00d646bc23617c6a59561a2e57ce5ab39d939" gracePeriod=600 Feb 18 14:11:59 crc kubenswrapper[4739]: I0218 14:11:59.971765 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="7bcd6eb763d9647cbf8a9e5cc6f00d646bc23617c6a59561a2e57ce5ab39d939" exitCode=0 Feb 18 14:11:59 crc kubenswrapper[4739]: I0218 14:11:59.971829 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" 
event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"7bcd6eb763d9647cbf8a9e5cc6f00d646bc23617c6a59561a2e57ce5ab39d939"} Feb 18 14:11:59 crc kubenswrapper[4739]: I0218 14:11:59.972044 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"808b39463ceef987da7bce6ba35b68857fd03ff372e8d867a6a7724e8f73df41"} Feb 18 14:11:59 crc kubenswrapper[4739]: I0218 14:11:59.972064 4739 scope.go:117] "RemoveContainer" containerID="e5125cf77dc88adc47d4e5b3a55e6110798f0702d937bab37daf1e38919e0775" Feb 18 14:12:01 crc kubenswrapper[4739]: I0218 14:12:01.872562 4739 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 14:12:03 crc kubenswrapper[4739]: I0218 14:12:03.439279 4739 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 18 14:12:03 crc kubenswrapper[4739]: I0218 14:12:03.439572 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="bfabc0be-78aa-4cf2-ae16-6d226b95be03" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 18 14:12:13 crc kubenswrapper[4739]: I0218 14:12:13.439000 4739 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 18 14:12:13 crc kubenswrapper[4739]: I0218 14:12:13.439297 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="bfabc0be-78aa-4cf2-ae16-6d226b95be03" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 18 14:12:23 crc kubenswrapper[4739]: I0218 14:12:23.442492 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.095284 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-rhjbv"] Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.096678 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.105438 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.105770 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-zpmx2" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.105825 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.105884 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.105779 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.120799 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.129774 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-rhjbv"] Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.165244 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-rhjbv"] Feb 18 14:12:41 crc kubenswrapper[4739]: E0218 14:12:41.165784 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-s5dgm metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-rhjbv" podUID="aa1b5b42-cc82-48f9-9cf8-9da8994d5199" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.243675 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5dgm\" (UniqueName: \"kubernetes.io/projected/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-kube-api-access-s5dgm\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.243737 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-entrypoint\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.243875 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-config\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.243915 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-config-openshift-service-cacrt\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.243983 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-sa-token\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.244008 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-metrics\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.244639 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-datadir\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.244759 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-tmp\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.244801 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-token\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.244858 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-syslog-receiver\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.244966 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-trusted-ca\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.276967 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.284372 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.347384 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-sa-token\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.347436 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-metrics\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.347496 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-datadir\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.347526 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-tmp\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.347542 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-token\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.347565 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-syslog-receiver\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.347616 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-trusted-ca\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.347644 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5dgm\" (UniqueName: \"kubernetes.io/projected/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-kube-api-access-s5dgm\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.347636 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-datadir\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.347668 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: 
\"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-entrypoint\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.347763 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-config\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.347797 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-config-openshift-service-cacrt\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: E0218 14:12:41.347880 4739 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found Feb 18 14:12:41 crc kubenswrapper[4739]: E0218 14:12:41.347939 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-syslog-receiver podName:aa1b5b42-cc82-48f9-9cf8-9da8994d5199 nodeName:}" failed. No retries permitted until 2026-02-18 14:12:41.847921222 +0000 UTC m=+794.343642154 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-syslog-receiver") pod "collector-rhjbv" (UID: "aa1b5b42-cc82-48f9-9cf8-9da8994d5199") : secret "collector-syslog-receiver" not found Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.348819 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-config-openshift-service-cacrt\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.348908 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-entrypoint\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.349301 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-config\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.349592 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-trusted-ca\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.353846 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-tmp\") pod \"collector-rhjbv\" (UID: 
\"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.354699 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-metrics\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.361049 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-token\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.380434 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5dgm\" (UniqueName: \"kubernetes.io/projected/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-kube-api-access-s5dgm\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.386072 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-sa-token\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.550402 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5dgm\" (UniqueName: \"kubernetes.io/projected/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-kube-api-access-s5dgm\") pod \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.550496 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-config-openshift-service-cacrt\") pod \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.550554 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-token\") pod \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.550582 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-sa-token\") pod \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.550614 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-trusted-ca\") pod \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.550682 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: 
\"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-metrics\") pod \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.550719 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-tmp\") pod \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.550761 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-datadir\") pod \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.550845 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-config\") pod \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.550875 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-entrypoint\") pod \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.551245 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "aa1b5b42-cc82-48f9-9cf8-9da8994d5199" (UID: "aa1b5b42-cc82-48f9-9cf8-9da8994d5199"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.551613 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-datadir" (OuterVolumeSpecName: "datadir") pod "aa1b5b42-cc82-48f9-9cf8-9da8994d5199" (UID: "aa1b5b42-cc82-48f9-9cf8-9da8994d5199"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.551762 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "aa1b5b42-cc82-48f9-9cf8-9da8994d5199" (UID: "aa1b5b42-cc82-48f9-9cf8-9da8994d5199"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.552125 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-config" (OuterVolumeSpecName: "config") pod "aa1b5b42-cc82-48f9-9cf8-9da8994d5199" (UID: "aa1b5b42-cc82-48f9-9cf8-9da8994d5199"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.552513 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "aa1b5b42-cc82-48f9-9cf8-9da8994d5199" (UID: "aa1b5b42-cc82-48f9-9cf8-9da8994d5199"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.553966 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-kube-api-access-s5dgm" (OuterVolumeSpecName: "kube-api-access-s5dgm") pod "aa1b5b42-cc82-48f9-9cf8-9da8994d5199" (UID: "aa1b5b42-cc82-48f9-9cf8-9da8994d5199"). InnerVolumeSpecName "kube-api-access-s5dgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.554000 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-metrics" (OuterVolumeSpecName: "metrics") pod "aa1b5b42-cc82-48f9-9cf8-9da8994d5199" (UID: "aa1b5b42-cc82-48f9-9cf8-9da8994d5199"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.554542 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-sa-token" (OuterVolumeSpecName: "sa-token") pod "aa1b5b42-cc82-48f9-9cf8-9da8994d5199" (UID: "aa1b5b42-cc82-48f9-9cf8-9da8994d5199"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.554713 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-tmp" (OuterVolumeSpecName: "tmp") pod "aa1b5b42-cc82-48f9-9cf8-9da8994d5199" (UID: "aa1b5b42-cc82-48f9-9cf8-9da8994d5199"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.554898 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-token" (OuterVolumeSpecName: "collector-token") pod "aa1b5b42-cc82-48f9-9cf8-9da8994d5199" (UID: "aa1b5b42-cc82-48f9-9cf8-9da8994d5199"). InnerVolumeSpecName "collector-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.652234 4739 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-token\") on node \"crc\" DevicePath \"\"" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.652280 4739 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.652293 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.652305 4739 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.652316 4739 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-tmp\") on node \"crc\" DevicePath \"\"" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.652327 4739 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-datadir\") on node \"crc\" DevicePath \"\"" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.652340 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.652353 4739 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-entrypoint\") on node \"crc\" DevicePath \"\"" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.652366 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5dgm\" (UniqueName: \"kubernetes.io/projected/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-kube-api-access-s5dgm\") on node \"crc\" DevicePath \"\"" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.652382 4739 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.855478 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-syslog-receiver\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:41 crc kubenswrapper[4739]: I0218 14:12:41.859385 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-syslog-receiver\") pod \"collector-rhjbv\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " pod="openshift-logging/collector-rhjbv" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.057879 4739 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-syslog-receiver\") pod \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\" (UID: \"aa1b5b42-cc82-48f9-9cf8-9da8994d5199\") " Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.060739 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "aa1b5b42-cc82-48f9-9cf8-9da8994d5199" (UID: "aa1b5b42-cc82-48f9-9cf8-9da8994d5199"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.160297 4739 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/aa1b5b42-cc82-48f9-9cf8-9da8994d5199-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.282953 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-rhjbv" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.335788 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-rhjbv"] Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.346036 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-ptdrt"] Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.347122 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.349269 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.349520 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-zpmx2" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.349643 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.349860 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.349992 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.352527 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-rhjbv"] Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.358549 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.359043 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-ptdrt"] Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.363029 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3d3df5da-d291-44d1-890f-4f094d9e8301-metrics\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.418990 4739 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="aa1b5b42-cc82-48f9-9cf8-9da8994d5199" path="/var/lib/kubelet/pods/aa1b5b42-cc82-48f9-9cf8-9da8994d5199/volumes" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.465347 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhrkm\" (UniqueName: \"kubernetes.io/projected/3d3df5da-d291-44d1-890f-4f094d9e8301-kube-api-access-nhrkm\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.465425 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3d3df5da-d291-44d1-890f-4f094d9e8301-metrics\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.465991 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3d3df5da-d291-44d1-890f-4f094d9e8301-config-openshift-service-cacrt\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.466111 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3d3df5da-d291-44d1-890f-4f094d9e8301-sa-token\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.466288 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3d3df5da-d291-44d1-890f-4f094d9e8301-datadir\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.466395 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3d3df5da-d291-44d1-890f-4f094d9e8301-tmp\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.466428 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d3df5da-d291-44d1-890f-4f094d9e8301-config\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.466471 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d3df5da-d291-44d1-890f-4f094d9e8301-trusted-ca\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.466488 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3d3df5da-d291-44d1-890f-4f094d9e8301-collector-syslog-receiver\") pod \"collector-ptdrt\" (UID: 
\"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.466608 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3d3df5da-d291-44d1-890f-4f094d9e8301-entrypoint\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.466699 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3d3df5da-d291-44d1-890f-4f094d9e8301-collector-token\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.470303 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3d3df5da-d291-44d1-890f-4f094d9e8301-metrics\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.567570 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d3df5da-d291-44d1-890f-4f094d9e8301-config\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.567638 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d3df5da-d291-44d1-890f-4f094d9e8301-trusted-ca\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.567665 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3d3df5da-d291-44d1-890f-4f094d9e8301-collector-syslog-receiver\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.567684 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3d3df5da-d291-44d1-890f-4f094d9e8301-entrypoint\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.567714 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3d3df5da-d291-44d1-890f-4f094d9e8301-collector-token\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.567807 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhrkm\" (UniqueName: \"kubernetes.io/projected/3d3df5da-d291-44d1-890f-4f094d9e8301-kube-api-access-nhrkm\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.567847 4739 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3d3df5da-d291-44d1-890f-4f094d9e8301-config-openshift-service-cacrt\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.567879 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3d3df5da-d291-44d1-890f-4f094d9e8301-sa-token\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.567902 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3d3df5da-d291-44d1-890f-4f094d9e8301-datadir\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.567958 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3d3df5da-d291-44d1-890f-4f094d9e8301-tmp\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.568590 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d3df5da-d291-44d1-890f-4f094d9e8301-trusted-ca\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.569068 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3d3df5da-d291-44d1-890f-4f094d9e8301-datadir\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.569330 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3d3df5da-d291-44d1-890f-4f094d9e8301-entrypoint\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.570042 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3d3df5da-d291-44d1-890f-4f094d9e8301-config-openshift-service-cacrt\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.570477 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d3df5da-d291-44d1-890f-4f094d9e8301-config\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.571325 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3d3df5da-d291-44d1-890f-4f094d9e8301-tmp\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.572048 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3d3df5da-d291-44d1-890f-4f094d9e8301-collector-token\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.572481 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3d3df5da-d291-44d1-890f-4f094d9e8301-collector-syslog-receiver\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.585966 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3d3df5da-d291-44d1-890f-4f094d9e8301-sa-token\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.586671 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhrkm\" (UniqueName: \"kubernetes.io/projected/3d3df5da-d291-44d1-890f-4f094d9e8301-kube-api-access-nhrkm\") pod \"collector-ptdrt\" (UID: \"3d3df5da-d291-44d1-890f-4f094d9e8301\") " pod="openshift-logging/collector-ptdrt" Feb 18 14:12:42 crc kubenswrapper[4739]: I0218 14:12:42.694706 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-ptdrt" Feb 18 14:12:43 crc kubenswrapper[4739]: I0218 14:12:43.121661 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-ptdrt"] Feb 18 14:12:43 crc kubenswrapper[4739]: W0218 14:12:43.125654 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d3df5da_d291_44d1_890f_4f094d9e8301.slice/crio-87ec7fbce368c7cdcd00a1b56d45e57beb9ad5b94ec3ab2ea5c2cc10c06058e7 WatchSource:0}: Error finding container 87ec7fbce368c7cdcd00a1b56d45e57beb9ad5b94ec3ab2ea5c2cc10c06058e7: Status 404 returned error can't find the container with id 87ec7fbce368c7cdcd00a1b56d45e57beb9ad5b94ec3ab2ea5c2cc10c06058e7 Feb 18 14:12:43 crc kubenswrapper[4739]: I0218 14:12:43.292823 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-ptdrt" event={"ID":"3d3df5da-d291-44d1-890f-4f094d9e8301","Type":"ContainerStarted","Data":"87ec7fbce368c7cdcd00a1b56d45e57beb9ad5b94ec3ab2ea5c2cc10c06058e7"} Feb 18 14:12:49 crc kubenswrapper[4739]: I0218 14:12:49.338594 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-ptdrt" event={"ID":"3d3df5da-d291-44d1-890f-4f094d9e8301","Type":"ContainerStarted","Data":"e5deeafef9dfa5065788c3d8bbe69dcaa4b097f1784edab75ef3b093d266bdd6"} Feb 18 14:12:49 crc kubenswrapper[4739]: I0218 14:12:49.361473 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-ptdrt" podStartSLOduration=2.321570688 podStartE2EDuration="7.361459434s" podCreationTimestamp="2026-02-18 14:12:42 +0000 UTC" firstStartedPulling="2026-02-18 14:12:43.127489451 +0000 UTC m=+795.623210373" lastFinishedPulling="2026-02-18 14:12:48.167378197 +0000 UTC m=+800.663099119" observedRunningTime="2026-02-18 14:12:49.359492656 +0000 UTC m=+801.855213578" watchObservedRunningTime="2026-02-18 14:12:49.361459434 +0000 UTC m=+801.857180366" Feb 18 14:13:20 crc 
kubenswrapper[4739]: I0218 14:13:20.602047 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g"] Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.605150 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g" Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.606667 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.614278 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g"] Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.656964 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd02fb2-605c-422a-9c28-67afe997782a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g\" (UID: \"6bd02fb2-605c-422a-9c28-67afe997782a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g" Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.657030 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w85qz\" (UniqueName: \"kubernetes.io/projected/6bd02fb2-605c-422a-9c28-67afe997782a-kube-api-access-w85qz\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g\" (UID: \"6bd02fb2-605c-422a-9c28-67afe997782a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g" Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.657056 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd02fb2-605c-422a-9c28-67afe997782a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g\" (UID: \"6bd02fb2-605c-422a-9c28-67afe997782a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g" Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.758169 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd02fb2-605c-422a-9c28-67afe997782a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g\" (UID: \"6bd02fb2-605c-422a-9c28-67afe997782a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g" Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.758255 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w85qz\" (UniqueName: \"kubernetes.io/projected/6bd02fb2-605c-422a-9c28-67afe997782a-kube-api-access-w85qz\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g\" (UID: \"6bd02fb2-605c-422a-9c28-67afe997782a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g" Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.758291 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd02fb2-605c-422a-9c28-67afe997782a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g\" (UID: \"6bd02fb2-605c-422a-9c28-67afe997782a\") 
" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g" Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.758788 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd02fb2-605c-422a-9c28-67afe997782a-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g\" (UID: \"6bd02fb2-605c-422a-9c28-67afe997782a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g" Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.759069 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd02fb2-605c-422a-9c28-67afe997782a-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g\" (UID: \"6bd02fb2-605c-422a-9c28-67afe997782a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g" Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.780390 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w85qz\" (UniqueName: \"kubernetes.io/projected/6bd02fb2-605c-422a-9c28-67afe997782a-kube-api-access-w85qz\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g\" (UID: \"6bd02fb2-605c-422a-9c28-67afe997782a\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g" Feb 18 14:13:20 crc kubenswrapper[4739]: I0218 14:13:20.970895 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g" Feb 18 14:13:21 crc kubenswrapper[4739]: I0218 14:13:21.428537 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g"] Feb 18 14:13:21 crc kubenswrapper[4739]: I0218 14:13:21.568512 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g" event={"ID":"6bd02fb2-605c-422a-9c28-67afe997782a","Type":"ContainerStarted","Data":"03c392543bcf7a212fd31fa833b25b81ada2374c6acf5495f28459c4fddb81e1"} Feb 18 14:13:22 crc kubenswrapper[4739]: I0218 14:13:22.579267 4739 generic.go:334] "Generic (PLEG): container finished" podID="6bd02fb2-605c-422a-9c28-67afe997782a" containerID="26ad8b06e108e948d219f5bf70871fd3097f85023f35ef81fd8e5c1b2be6f5d7" exitCode=0 Feb 18 14:13:22 crc kubenswrapper[4739]: I0218 14:13:22.579386 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g" event={"ID":"6bd02fb2-605c-422a-9c28-67afe997782a","Type":"ContainerDied","Data":"26ad8b06e108e948d219f5bf70871fd3097f85023f35ef81fd8e5c1b2be6f5d7"} Feb 18 14:13:22 crc kubenswrapper[4739]: I0218 14:13:22.930973 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qm8vl"] Feb 18 14:13:22 crc kubenswrapper[4739]: I0218 14:13:22.933255 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:22 crc kubenswrapper[4739]: I0218 14:13:22.953313 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qm8vl"] Feb 18 14:13:22 crc kubenswrapper[4739]: I0218 14:13:22.992007 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b40ab76-c055-427e-9e8a-f553ae86113c-utilities\") pod \"redhat-operators-qm8vl\" (UID: \"1b40ab76-c055-427e-9e8a-f553ae86113c\") " pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:22 crc kubenswrapper[4739]: I0218 14:13:22.992076 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fkwc\" (UniqueName: \"kubernetes.io/projected/1b40ab76-c055-427e-9e8a-f553ae86113c-kube-api-access-2fkwc\") pod \"redhat-operators-qm8vl\" (UID: \"1b40ab76-c055-427e-9e8a-f553ae86113c\") " pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:22 crc kubenswrapper[4739]: I0218 14:13:22.992187 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b40ab76-c055-427e-9e8a-f553ae86113c-catalog-content\") pod \"redhat-operators-qm8vl\" (UID: \"1b40ab76-c055-427e-9e8a-f553ae86113c\") " pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:23 crc kubenswrapper[4739]: I0218 14:13:23.093934 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b40ab76-c055-427e-9e8a-f553ae86113c-utilities\") pod \"redhat-operators-qm8vl\" (UID: \"1b40ab76-c055-427e-9e8a-f553ae86113c\") " pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:23 crc kubenswrapper[4739]: I0218 14:13:23.093990 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fkwc\" (UniqueName: \"kubernetes.io/projected/1b40ab76-c055-427e-9e8a-f553ae86113c-kube-api-access-2fkwc\") pod \"redhat-operators-qm8vl\" (UID: \"1b40ab76-c055-427e-9e8a-f553ae86113c\") " pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:23 crc kubenswrapper[4739]: I0218 14:13:23.094039 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b40ab76-c055-427e-9e8a-f553ae86113c-catalog-content\") pod \"redhat-operators-qm8vl\" (UID: \"1b40ab76-c055-427e-9e8a-f553ae86113c\") " pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:23 crc kubenswrapper[4739]: I0218 14:13:23.094505 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b40ab76-c055-427e-9e8a-f553ae86113c-utilities\") pod \"redhat-operators-qm8vl\" (UID: \"1b40ab76-c055-427e-9e8a-f553ae86113c\") " pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:23 crc kubenswrapper[4739]: I0218 14:13:23.094557 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b40ab76-c055-427e-9e8a-f553ae86113c-catalog-content\") pod \"redhat-operators-qm8vl\" (UID: \"1b40ab76-c055-427e-9e8a-f553ae86113c\") " pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:23 crc kubenswrapper[4739]: I0218 14:13:23.117746 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2fkwc\" (UniqueName: \"kubernetes.io/projected/1b40ab76-c055-427e-9e8a-f553ae86113c-kube-api-access-2fkwc\") pod \"redhat-operators-qm8vl\" (UID: \"1b40ab76-c055-427e-9e8a-f553ae86113c\") " pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:23 crc kubenswrapper[4739]: I0218 14:13:23.258536 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:23 crc kubenswrapper[4739]: I0218 14:13:23.658169 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qm8vl"] Feb 18 14:13:23 crc kubenswrapper[4739]: W0218 14:13:23.677822 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b40ab76_c055_427e_9e8a_f553ae86113c.slice/crio-472d1c0686504098b4902a6b0ffad9bd6a5072f1ad104d52c9f68f91b00f0772 WatchSource:0}: Error finding container 472d1c0686504098b4902a6b0ffad9bd6a5072f1ad104d52c9f68f91b00f0772: Status 404 returned error can't find the container with id 472d1c0686504098b4902a6b0ffad9bd6a5072f1ad104d52c9f68f91b00f0772 Feb 18 14:13:24 crc kubenswrapper[4739]: I0218 14:13:24.596116 4739 generic.go:334] "Generic (PLEG): container finished" podID="1b40ab76-c055-427e-9e8a-f553ae86113c" containerID="fb1f1070bc85c1484ae1eb1848eed09c39d4bb15ed12aa5bc4e998a5726c4c47" exitCode=0 Feb 18 14:13:24 crc kubenswrapper[4739]: I0218 14:13:24.596534 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qm8vl" event={"ID":"1b40ab76-c055-427e-9e8a-f553ae86113c","Type":"ContainerDied","Data":"fb1f1070bc85c1484ae1eb1848eed09c39d4bb15ed12aa5bc4e998a5726c4c47"} Feb 18 14:13:24 crc kubenswrapper[4739]: I0218 14:13:24.596628 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qm8vl" event={"ID":"1b40ab76-c055-427e-9e8a-f553ae86113c","Type":"ContainerStarted","Data":"472d1c0686504098b4902a6b0ffad9bd6a5072f1ad104d52c9f68f91b00f0772"} Feb 18 14:13:24 crc kubenswrapper[4739]: I0218 14:13:24.599632 4739 generic.go:334] "Generic (PLEG): container finished" podID="6bd02fb2-605c-422a-9c28-67afe997782a" containerID="50a53bc980ed07dc86602f6de51cd28e5b32eab649a9d4da648c3c3de6a9cc42" exitCode=0 Feb 18 14:13:24 crc kubenswrapper[4739]: I0218 14:13:24.599692 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g" event={"ID":"6bd02fb2-605c-422a-9c28-67afe997782a","Type":"ContainerDied","Data":"50a53bc980ed07dc86602f6de51cd28e5b32eab649a9d4da648c3c3de6a9cc42"} Feb 18 14:13:25 crc kubenswrapper[4739]: I0218 14:13:25.609295 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qm8vl" event={"ID":"1b40ab76-c055-427e-9e8a-f553ae86113c","Type":"ContainerStarted","Data":"8571557eb9990ecf7bf734140c6fa8f089d0320f8fd95ceb1253253a72ca2b7f"} Feb 18 14:13:25 crc kubenswrapper[4739]: I0218 14:13:25.613144 4739 generic.go:334] "Generic (PLEG): container finished" podID="6bd02fb2-605c-422a-9c28-67afe997782a" containerID="1e2468c02ad86c812d40581838ffe7c6492e6248aa2f79e8096408c8f16ebdbd" exitCode=0 Feb 18 14:13:25 crc kubenswrapper[4739]: I0218 14:13:25.613187 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g" 
event={"ID":"6bd02fb2-605c-422a-9c28-67afe997782a","Type":"ContainerDied","Data":"1e2468c02ad86c812d40581838ffe7c6492e6248aa2f79e8096408c8f16ebdbd"} Feb 18 14:13:26 crc kubenswrapper[4739]: I0218 14:13:26.620303 4739 generic.go:334] "Generic (PLEG): container finished" podID="1b40ab76-c055-427e-9e8a-f553ae86113c" containerID="8571557eb9990ecf7bf734140c6fa8f089d0320f8fd95ceb1253253a72ca2b7f" exitCode=0 Feb 18 14:13:26 crc kubenswrapper[4739]: I0218 14:13:26.620341 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qm8vl" event={"ID":"1b40ab76-c055-427e-9e8a-f553ae86113c","Type":"ContainerDied","Data":"8571557eb9990ecf7bf734140c6fa8f089d0320f8fd95ceb1253253a72ca2b7f"} Feb 18 14:13:26 crc kubenswrapper[4739]: I0218 14:13:26.917782 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g" Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.048281 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w85qz\" (UniqueName: \"kubernetes.io/projected/6bd02fb2-605c-422a-9c28-67afe997782a-kube-api-access-w85qz\") pod \"6bd02fb2-605c-422a-9c28-67afe997782a\" (UID: \"6bd02fb2-605c-422a-9c28-67afe997782a\") " Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.048467 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd02fb2-605c-422a-9c28-67afe997782a-bundle\") pod \"6bd02fb2-605c-422a-9c28-67afe997782a\" (UID: \"6bd02fb2-605c-422a-9c28-67afe997782a\") " Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.048730 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd02fb2-605c-422a-9c28-67afe997782a-util\") pod \"6bd02fb2-605c-422a-9c28-67afe997782a\" (UID: \"6bd02fb2-605c-422a-9c28-67afe997782a\") " Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.049009 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd02fb2-605c-422a-9c28-67afe997782a-bundle" (OuterVolumeSpecName: "bundle") pod "6bd02fb2-605c-422a-9c28-67afe997782a" (UID: "6bd02fb2-605c-422a-9c28-67afe997782a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.049301 4739 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd02fb2-605c-422a-9c28-67afe997782a-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.066861 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd02fb2-605c-422a-9c28-67afe997782a-util" (OuterVolumeSpecName: "util") pod "6bd02fb2-605c-422a-9c28-67afe997782a" (UID: "6bd02fb2-605c-422a-9c28-67afe997782a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.083618 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bd02fb2-605c-422a-9c28-67afe997782a-kube-api-access-w85qz" (OuterVolumeSpecName: "kube-api-access-w85qz") pod "6bd02fb2-605c-422a-9c28-67afe997782a" (UID: "6bd02fb2-605c-422a-9c28-67afe997782a"). InnerVolumeSpecName "kube-api-access-w85qz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.154361 4739 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd02fb2-605c-422a-9c28-67afe997782a-util\") on node \"crc\" DevicePath \"\"" Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.154398 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w85qz\" (UniqueName: \"kubernetes.io/projected/6bd02fb2-605c-422a-9c28-67afe997782a-kube-api-access-w85qz\") on node \"crc\" DevicePath \"\"" Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.628140 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g" Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.628141 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g" event={"ID":"6bd02fb2-605c-422a-9c28-67afe997782a","Type":"ContainerDied","Data":"03c392543bcf7a212fd31fa833b25b81ada2374c6acf5495f28459c4fddb81e1"} Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.628590 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03c392543bcf7a212fd31fa833b25b81ada2374c6acf5495f28459c4fddb81e1" Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.630321 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qm8vl" event={"ID":"1b40ab76-c055-427e-9e8a-f553ae86113c","Type":"ContainerStarted","Data":"d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0"} Feb 18 14:13:27 crc kubenswrapper[4739]: I0218 14:13:27.653117 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qm8vl" podStartSLOduration=3.206870662 podStartE2EDuration="5.653099223s" podCreationTimestamp="2026-02-18 14:13:22 +0000 UTC" firstStartedPulling="2026-02-18 14:13:24.598832482 +0000 UTC m=+837.094553404" lastFinishedPulling="2026-02-18 14:13:27.045061043 +0000 UTC m=+839.540781965" observedRunningTime="2026-02-18 14:13:27.649149437 +0000 UTC m=+840.144870369" watchObservedRunningTime="2026-02-18 14:13:27.653099223 +0000 UTC m=+840.148820145" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.419047 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-77rqb"] Feb 18 14:13:30 crc kubenswrapper[4739]: E0218 14:13:30.419513 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd02fb2-605c-422a-9c28-67afe997782a" containerName="pull" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.419525 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd02fb2-605c-422a-9c28-67afe997782a" containerName="pull" Feb 18 14:13:30 crc kubenswrapper[4739]: E0218 14:13:30.419536 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd02fb2-605c-422a-9c28-67afe997782a" containerName="extract" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.419542 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd02fb2-605c-422a-9c28-67afe997782a" containerName="extract" Feb 18 14:13:30 crc kubenswrapper[4739]: E0218 14:13:30.419559 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd02fb2-605c-422a-9c28-67afe997782a" containerName="util" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.419565 4739 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd02fb2-605c-422a-9c28-67afe997782a" containerName="util" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.419690 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd02fb2-605c-422a-9c28-67afe997782a" containerName="extract" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.420271 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-77rqb" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.422119 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-jfk6h" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.422273 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.422289 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.432629 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-77rqb"] Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.506218 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlttt\" (UniqueName: \"kubernetes.io/projected/2f5c1234-49df-4f31-842f-cdaf04adff3c-kube-api-access-nlttt\") pod \"nmstate-operator-694c9596b7-77rqb\" (UID: \"2f5c1234-49df-4f31-842f-cdaf04adff3c\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-77rqb" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.607576 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlttt\" (UniqueName: \"kubernetes.io/projected/2f5c1234-49df-4f31-842f-cdaf04adff3c-kube-api-access-nlttt\") pod \"nmstate-operator-694c9596b7-77rqb\" (UID: \"2f5c1234-49df-4f31-842f-cdaf04adff3c\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-77rqb" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.628622 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlttt\" (UniqueName: \"kubernetes.io/projected/2f5c1234-49df-4f31-842f-cdaf04adff3c-kube-api-access-nlttt\") pod \"nmstate-operator-694c9596b7-77rqb\" (UID: \"2f5c1234-49df-4f31-842f-cdaf04adff3c\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-77rqb" Feb 18 14:13:30 crc kubenswrapper[4739]: I0218 14:13:30.737588 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-77rqb" Feb 18 14:13:31 crc kubenswrapper[4739]: I0218 14:13:31.272956 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-77rqb"] Feb 18 14:13:31 crc kubenswrapper[4739]: W0218 14:13:31.279715 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f5c1234_49df_4f31_842f_cdaf04adff3c.slice/crio-a3da4e204416676022d6792527349df1dfd1067a0b42cde0d88c8fafff073f7d WatchSource:0}: Error finding container a3da4e204416676022d6792527349df1dfd1067a0b42cde0d88c8fafff073f7d: Status 404 returned error can't find the container with id a3da4e204416676022d6792527349df1dfd1067a0b42cde0d88c8fafff073f7d Feb 18 14:13:31 crc kubenswrapper[4739]: I0218 14:13:31.658890 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-77rqb" event={"ID":"2f5c1234-49df-4f31-842f-cdaf04adff3c","Type":"ContainerStarted","Data":"a3da4e204416676022d6792527349df1dfd1067a0b42cde0d88c8fafff073f7d"} Feb 18 14:13:33 crc kubenswrapper[4739]: I0218 14:13:33.259019 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:33 crc kubenswrapper[4739]: I0218 14:13:33.259372 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:33 crc kubenswrapper[4739]: I0218 14:13:33.301706 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:33 crc kubenswrapper[4739]: I0218 14:13:33.732812 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:34 crc kubenswrapper[4739]: I0218 14:13:34.694276 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-77rqb" event={"ID":"2f5c1234-49df-4f31-842f-cdaf04adff3c","Type":"ContainerStarted","Data":"56b4af6f1217a04bbd1a405ce01403d508c4770c8780d7c6d34a1e41809945b5"} Feb 18 14:13:34 crc kubenswrapper[4739]: I0218 14:13:34.712555 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-77rqb" podStartSLOduration=2.45339476 podStartE2EDuration="4.712520443s" podCreationTimestamp="2026-02-18 14:13:30 +0000 UTC" firstStartedPulling="2026-02-18 14:13:31.281629028 +0000 UTC m=+843.777349950" lastFinishedPulling="2026-02-18 14:13:33.540754711 +0000 UTC m=+846.036475633" observedRunningTime="2026-02-18 14:13:34.71157304 +0000 UTC m=+847.207293962" watchObservedRunningTime="2026-02-18 14:13:34.712520443 +0000 UTC m=+847.208241445" Feb 18 14:13:35 crc kubenswrapper[4739]: I0218 14:13:35.720185 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qm8vl"] Feb 18 14:13:35 crc kubenswrapper[4739]: I0218 14:13:35.720491 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qm8vl" podUID="1b40ab76-c055-427e-9e8a-f553ae86113c" containerName="registry-server" containerID="cri-o://d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0" gracePeriod=2 Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.103185 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.203413 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fkwc\" (UniqueName: \"kubernetes.io/projected/1b40ab76-c055-427e-9e8a-f553ae86113c-kube-api-access-2fkwc\") pod \"1b40ab76-c055-427e-9e8a-f553ae86113c\" (UID: \"1b40ab76-c055-427e-9e8a-f553ae86113c\") " Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.203488 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b40ab76-c055-427e-9e8a-f553ae86113c-utilities\") pod \"1b40ab76-c055-427e-9e8a-f553ae86113c\" (UID: \"1b40ab76-c055-427e-9e8a-f553ae86113c\") " Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.203538 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b40ab76-c055-427e-9e8a-f553ae86113c-catalog-content\") pod \"1b40ab76-c055-427e-9e8a-f553ae86113c\" (UID: \"1b40ab76-c055-427e-9e8a-f553ae86113c\") " Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.204595 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b40ab76-c055-427e-9e8a-f553ae86113c-utilities" (OuterVolumeSpecName: "utilities") pod "1b40ab76-c055-427e-9e8a-f553ae86113c" (UID: "1b40ab76-c055-427e-9e8a-f553ae86113c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.209406 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b40ab76-c055-427e-9e8a-f553ae86113c-kube-api-access-2fkwc" (OuterVolumeSpecName: "kube-api-access-2fkwc") pod "1b40ab76-c055-427e-9e8a-f553ae86113c" (UID: "1b40ab76-c055-427e-9e8a-f553ae86113c"). InnerVolumeSpecName "kube-api-access-2fkwc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.305696 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fkwc\" (UniqueName: \"kubernetes.io/projected/1b40ab76-c055-427e-9e8a-f553ae86113c-kube-api-access-2fkwc\") on node \"crc\" DevicePath \"\"" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.305737 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b40ab76-c055-427e-9e8a-f553ae86113c-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.712495 4739 generic.go:334] "Generic (PLEG): container finished" podID="1b40ab76-c055-427e-9e8a-f553ae86113c" containerID="d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0" exitCode=0 Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.712550 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qm8vl" event={"ID":"1b40ab76-c055-427e-9e8a-f553ae86113c","Type":"ContainerDied","Data":"d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0"} Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.712583 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qm8vl" event={"ID":"1b40ab76-c055-427e-9e8a-f553ae86113c","Type":"ContainerDied","Data":"472d1c0686504098b4902a6b0ffad9bd6a5072f1ad104d52c9f68f91b00f0772"} Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.712594 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qm8vl" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.712607 4739 scope.go:117] "RemoveContainer" containerID="d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.732531 4739 scope.go:117] "RemoveContainer" containerID="8571557eb9990ecf7bf734140c6fa8f089d0320f8fd95ceb1253253a72ca2b7f" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.760652 4739 scope.go:117] "RemoveContainer" containerID="fb1f1070bc85c1484ae1eb1848eed09c39d4bb15ed12aa5bc4e998a5726c4c47" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.777583 4739 scope.go:117] "RemoveContainer" containerID="d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0" Feb 18 14:13:36 crc kubenswrapper[4739]: E0218 14:13:36.781085 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0\": container with ID starting with d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0 not found: ID does not exist" containerID="d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.781151 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0"} err="failed to get container status \"d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0\": rpc error: code = NotFound desc = could not find container \"d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0\": container with ID starting with d841d31f6007bc98ebb3159f39bd268c881ea32089648fc552f19803534b28c0 not found: ID does not exist" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.781192 4739 scope.go:117] 
"RemoveContainer" containerID="8571557eb9990ecf7bf734140c6fa8f089d0320f8fd95ceb1253253a72ca2b7f" Feb 18 14:13:36 crc kubenswrapper[4739]: E0218 14:13:36.781540 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8571557eb9990ecf7bf734140c6fa8f089d0320f8fd95ceb1253253a72ca2b7f\": container with ID starting with 8571557eb9990ecf7bf734140c6fa8f089d0320f8fd95ceb1253253a72ca2b7f not found: ID does not exist" containerID="8571557eb9990ecf7bf734140c6fa8f089d0320f8fd95ceb1253253a72ca2b7f" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.781579 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8571557eb9990ecf7bf734140c6fa8f089d0320f8fd95ceb1253253a72ca2b7f"} err="failed to get container status \"8571557eb9990ecf7bf734140c6fa8f089d0320f8fd95ceb1253253a72ca2b7f\": rpc error: code = NotFound desc = could not find container \"8571557eb9990ecf7bf734140c6fa8f089d0320f8fd95ceb1253253a72ca2b7f\": container with ID starting with 8571557eb9990ecf7bf734140c6fa8f089d0320f8fd95ceb1253253a72ca2b7f not found: ID does not exist" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.781603 4739 scope.go:117] "RemoveContainer" containerID="fb1f1070bc85c1484ae1eb1848eed09c39d4bb15ed12aa5bc4e998a5726c4c47" Feb 18 14:13:36 crc kubenswrapper[4739]: E0218 14:13:36.782032 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb1f1070bc85c1484ae1eb1848eed09c39d4bb15ed12aa5bc4e998a5726c4c47\": container with ID starting with fb1f1070bc85c1484ae1eb1848eed09c39d4bb15ed12aa5bc4e998a5726c4c47 not found: ID does not exist" containerID="fb1f1070bc85c1484ae1eb1848eed09c39d4bb15ed12aa5bc4e998a5726c4c47" Feb 18 14:13:36 crc kubenswrapper[4739]: I0218 14:13:36.782067 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb1f1070bc85c1484ae1eb1848eed09c39d4bb15ed12aa5bc4e998a5726c4c47"} err="failed to get container status \"fb1f1070bc85c1484ae1eb1848eed09c39d4bb15ed12aa5bc4e998a5726c4c47\": rpc error: code = NotFound desc = could not find container \"fb1f1070bc85c1484ae1eb1848eed09c39d4bb15ed12aa5bc4e998a5726c4c47\": container with ID starting with fb1f1070bc85c1484ae1eb1848eed09c39d4bb15ed12aa5bc4e998a5726c4c47 not found: ID does not exist" Feb 18 14:13:37 crc kubenswrapper[4739]: I0218 14:13:37.723341 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b40ab76-c055-427e-9e8a-f553ae86113c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b40ab76-c055-427e-9e8a-f553ae86113c" (UID: "1b40ab76-c055-427e-9e8a-f553ae86113c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:13:37 crc kubenswrapper[4739]: I0218 14:13:37.726470 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b40ab76-c055-427e-9e8a-f553ae86113c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:13:37 crc kubenswrapper[4739]: I0218 14:13:37.941052 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qm8vl"] Feb 18 14:13:37 crc kubenswrapper[4739]: I0218 14:13:37.946657 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qm8vl"] Feb 18 14:13:38 crc kubenswrapper[4739]: I0218 14:13:38.421182 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b40ab76-c055-427e-9e8a-f553ae86113c" path="/var/lib/kubelet/pods/1b40ab76-c055-427e-9e8a-f553ae86113c/volumes" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.543651 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-4l8z8"] Feb 18 14:13:41 crc kubenswrapper[4739]: E0218 14:13:41.544616 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b40ab76-c055-427e-9e8a-f553ae86113c" containerName="registry-server" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.544657 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b40ab76-c055-427e-9e8a-f553ae86113c" containerName="registry-server" Feb 18 14:13:41 crc kubenswrapper[4739]: E0218 14:13:41.544679 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b40ab76-c055-427e-9e8a-f553ae86113c" containerName="extract-utilities" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.544688 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b40ab76-c055-427e-9e8a-f553ae86113c" containerName="extract-utilities" Feb 18 14:13:41 crc kubenswrapper[4739]: E0218 14:13:41.544701 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b40ab76-c055-427e-9e8a-f553ae86113c" containerName="extract-content" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.544709 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b40ab76-c055-427e-9e8a-f553ae86113c" containerName="extract-content" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.544878 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b40ab76-c055-427e-9e8a-f553ae86113c" containerName="registry-server" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.545888 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4l8z8" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.548121 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-bqjx6" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.552167 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97"] Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.553188 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.554338 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.557659 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-4l8z8"] Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.569592 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97"] Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.574980 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-xwm5v"] Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.576191 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.698515 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/547a8c99-05a3-45bf-9e45-785d6cdb8fb5-ovs-socket\") pod \"nmstate-handler-xwm5v\" (UID: \"547a8c99-05a3-45bf-9e45-785d6cdb8fb5\") " pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.698600 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/547a8c99-05a3-45bf-9e45-785d6cdb8fb5-dbus-socket\") pod \"nmstate-handler-xwm5v\" (UID: \"547a8c99-05a3-45bf-9e45-785d6cdb8fb5\") " pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.698802 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/547a8c99-05a3-45bf-9e45-785d6cdb8fb5-nmstate-lock\") pod \"nmstate-handler-xwm5v\" (UID: \"547a8c99-05a3-45bf-9e45-785d6cdb8fb5\") " pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.698872 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cstr\" (UniqueName: \"kubernetes.io/projected/547a8c99-05a3-45bf-9e45-785d6cdb8fb5-kube-api-access-5cstr\") pod \"nmstate-handler-xwm5v\" (UID: \"547a8c99-05a3-45bf-9e45-785d6cdb8fb5\") " pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.698908 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ff0bf868-48fc-48a7-845d-3286c1dd16f0-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-wtz97\" (UID: \"ff0bf868-48fc-48a7-845d-3286c1dd16f0\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.699010 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkhlx\" (UniqueName: \"kubernetes.io/projected/3bc7475a-7f37-4d47-a7e8-2c58a37c7c0b-kube-api-access-xkhlx\") pod \"nmstate-metrics-58c85c668d-4l8z8\" (UID: \"3bc7475a-7f37-4d47-a7e8-2c58a37c7c0b\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-4l8z8" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.699054 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxtzg\" (UniqueName: \"kubernetes.io/projected/ff0bf868-48fc-48a7-845d-3286c1dd16f0-kube-api-access-qxtzg\") pod \"nmstate-webhook-866bcb46dc-wtz97\" (UID: \"ff0bf868-48fc-48a7-845d-3286c1dd16f0\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.710525 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g"] Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.715675 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.717435 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.717531 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.723952 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-ltrzj" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.727703 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g"] Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.800903 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/292e9bf2-9674-423f-9ba5-4e83ff259a06-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-c8h9g\" (UID: \"292e9bf2-9674-423f-9ba5-4e83ff259a06\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.800973 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkhlx\" (UniqueName: \"kubernetes.io/projected/3bc7475a-7f37-4d47-a7e8-2c58a37c7c0b-kube-api-access-xkhlx\") pod \"nmstate-metrics-58c85c668d-4l8z8\" (UID: \"3bc7475a-7f37-4d47-a7e8-2c58a37c7c0b\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-4l8z8" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.801009 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxtzg\" (UniqueName: \"kubernetes.io/projected/ff0bf868-48fc-48a7-845d-3286c1dd16f0-kube-api-access-qxtzg\") pod \"nmstate-webhook-866bcb46dc-wtz97\" (UID: \"ff0bf868-48fc-48a7-845d-3286c1dd16f0\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.801056 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krl4p\" (UniqueName: \"kubernetes.io/projected/292e9bf2-9674-423f-9ba5-4e83ff259a06-kube-api-access-krl4p\") pod \"nmstate-console-plugin-5c78fc5d65-c8h9g\" (UID: \"292e9bf2-9674-423f-9ba5-4e83ff259a06\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.801103 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/547a8c99-05a3-45bf-9e45-785d6cdb8fb5-ovs-socket\") pod \"nmstate-handler-xwm5v\" (UID: \"547a8c99-05a3-45bf-9e45-785d6cdb8fb5\") " pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc 
kubenswrapper[4739]: I0218 14:13:41.801147 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/547a8c99-05a3-45bf-9e45-785d6cdb8fb5-dbus-socket\") pod \"nmstate-handler-xwm5v\" (UID: \"547a8c99-05a3-45bf-9e45-785d6cdb8fb5\") " pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.801201 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/292e9bf2-9674-423f-9ba5-4e83ff259a06-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-c8h9g\" (UID: \"292e9bf2-9674-423f-9ba5-4e83ff259a06\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.801247 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/547a8c99-05a3-45bf-9e45-785d6cdb8fb5-nmstate-lock\") pod \"nmstate-handler-xwm5v\" (UID: \"547a8c99-05a3-45bf-9e45-785d6cdb8fb5\") " pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.801279 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cstr\" (UniqueName: \"kubernetes.io/projected/547a8c99-05a3-45bf-9e45-785d6cdb8fb5-kube-api-access-5cstr\") pod \"nmstate-handler-xwm5v\" (UID: \"547a8c99-05a3-45bf-9e45-785d6cdb8fb5\") " pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.801303 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ff0bf868-48fc-48a7-845d-3286c1dd16f0-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-wtz97\" (UID: \"ff0bf868-48fc-48a7-845d-3286c1dd16f0\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" Feb 18 14:13:41 crc kubenswrapper[4739]: E0218 14:13:41.801482 4739 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.801496 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/547a8c99-05a3-45bf-9e45-785d6cdb8fb5-ovs-socket\") pod \"nmstate-handler-xwm5v\" (UID: \"547a8c99-05a3-45bf-9e45-785d6cdb8fb5\") " pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: E0218 14:13:41.801554 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff0bf868-48fc-48a7-845d-3286c1dd16f0-tls-key-pair podName:ff0bf868-48fc-48a7-845d-3286c1dd16f0 nodeName:}" failed. No retries permitted until 2026-02-18 14:13:42.301532683 +0000 UTC m=+854.797253605 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/ff0bf868-48fc-48a7-845d-3286c1dd16f0-tls-key-pair") pod "nmstate-webhook-866bcb46dc-wtz97" (UID: "ff0bf868-48fc-48a7-845d-3286c1dd16f0") : secret "openshift-nmstate-webhook" not found Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.801829 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/547a8c99-05a3-45bf-9e45-785d6cdb8fb5-dbus-socket\") pod \"nmstate-handler-xwm5v\" (UID: \"547a8c99-05a3-45bf-9e45-785d6cdb8fb5\") " pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.801879 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/547a8c99-05a3-45bf-9e45-785d6cdb8fb5-nmstate-lock\") pod \"nmstate-handler-xwm5v\" (UID: \"547a8c99-05a3-45bf-9e45-785d6cdb8fb5\") " pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.819598 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cstr\" (UniqueName: \"kubernetes.io/projected/547a8c99-05a3-45bf-9e45-785d6cdb8fb5-kube-api-access-5cstr\") pod \"nmstate-handler-xwm5v\" (UID: \"547a8c99-05a3-45bf-9e45-785d6cdb8fb5\") " pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.824950 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxtzg\" (UniqueName: \"kubernetes.io/projected/ff0bf868-48fc-48a7-845d-3286c1dd16f0-kube-api-access-qxtzg\") pod \"nmstate-webhook-866bcb46dc-wtz97\" (UID: \"ff0bf868-48fc-48a7-845d-3286c1dd16f0\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.829774 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkhlx\" (UniqueName: \"kubernetes.io/projected/3bc7475a-7f37-4d47-a7e8-2c58a37c7c0b-kube-api-access-xkhlx\") pod \"nmstate-metrics-58c85c668d-4l8z8\" (UID: \"3bc7475a-7f37-4d47-a7e8-2c58a37c7c0b\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-4l8z8" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.868807 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4l8z8" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.890880 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-58cc898c97-gzzx9"] Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.892384 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.895869 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.907476 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/292e9bf2-9674-423f-9ba5-4e83ff259a06-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-c8h9g\" (UID: \"292e9bf2-9674-423f-9ba5-4e83ff259a06\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.907577 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krl4p\" (UniqueName: \"kubernetes.io/projected/292e9bf2-9674-423f-9ba5-4e83ff259a06-kube-api-access-krl4p\") pod \"nmstate-console-plugin-5c78fc5d65-c8h9g\" (UID: \"292e9bf2-9674-423f-9ba5-4e83ff259a06\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.907669 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/292e9bf2-9674-423f-9ba5-4e83ff259a06-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-c8h9g\" (UID: \"292e9bf2-9674-423f-9ba5-4e83ff259a06\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.908642 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/292e9bf2-9674-423f-9ba5-4e83ff259a06-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-c8h9g\" (UID: \"292e9bf2-9674-423f-9ba5-4e83ff259a06\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.924206 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/292e9bf2-9674-423f-9ba5-4e83ff259a06-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-c8h9g\" (UID: \"292e9bf2-9674-423f-9ba5-4e83ff259a06\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.927831 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-58cc898c97-gzzx9"] Feb 18 14:13:41 crc kubenswrapper[4739]: I0218 14:13:41.946104 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krl4p\" (UniqueName: \"kubernetes.io/projected/292e9bf2-9674-423f-9ba5-4e83ff259a06-kube-api-access-krl4p\") pod \"nmstate-console-plugin-5c78fc5d65-c8h9g\" (UID: \"292e9bf2-9674-423f-9ba5-4e83ff259a06\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.009913 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-oauth-config\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.009978 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-service-ca\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " 
pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.009997 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-serving-cert\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.010025 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-trusted-ca-bundle\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.010077 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-config\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.010105 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7h7c\" (UniqueName: \"kubernetes.io/projected/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-kube-api-access-f7h7c\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.010138 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-oauth-serving-cert\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.041188 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.111677 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-config\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.112410 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7h7c\" (UniqueName: \"kubernetes.io/projected/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-kube-api-access-f7h7c\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.112486 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-oauth-serving-cert\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.112569 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-oauth-config\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.112619 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-service-ca\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.112645 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-serving-cert\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.112692 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-trusted-ca-bundle\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.113829 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-oauth-serving-cert\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.114469 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-config\") pod \"console-58cc898c97-gzzx9\" (UID: 
\"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.115302 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-trusted-ca-bundle\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.115840 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-service-ca\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.122181 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-serving-cert\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.122493 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-oauth-config\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.136176 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7h7c\" (UniqueName: \"kubernetes.io/projected/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-kube-api-access-f7h7c\") pod \"console-58cc898c97-gzzx9\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.281901 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.318547 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ff0bf868-48fc-48a7-845d-3286c1dd16f0-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-wtz97\" (UID: \"ff0bf868-48fc-48a7-845d-3286c1dd16f0\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.322067 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ff0bf868-48fc-48a7-845d-3286c1dd16f0-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-wtz97\" (UID: \"ff0bf868-48fc-48a7-845d-3286c1dd16f0\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.372575 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-4l8z8"] Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.480975 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.570566 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g"] Feb 18 14:13:42 crc kubenswrapper[4739]: W0218 14:13:42.571178 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod292e9bf2_9674_423f_9ba5_4e83ff259a06.slice/crio-7d3c31fe261df466571cb9e5f65f322987eed4df8dc9a806c7cc9344b617a57e WatchSource:0}: Error finding container 7d3c31fe261df466571cb9e5f65f322987eed4df8dc9a806c7cc9344b617a57e: Status 404 returned error can't find the container with id 7d3c31fe261df466571cb9e5f65f322987eed4df8dc9a806c7cc9344b617a57e Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.734991 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-58cc898c97-gzzx9"] Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.754191 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4l8z8" event={"ID":"3bc7475a-7f37-4d47-a7e8-2c58a37c7c0b","Type":"ContainerStarted","Data":"db934d01b5ad14806be80372f23deb93efa9d5ab36049317a3bbb8668bca66c5"} Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.755428 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-xwm5v" event={"ID":"547a8c99-05a3-45bf-9e45-785d6cdb8fb5","Type":"ContainerStarted","Data":"11ca5504b72f9aa7707686a2a3fee5372fa6474e212bf2540858e3cd76434747"} Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.757278 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58cc898c97-gzzx9" event={"ID":"4cd95c4f-592d-4c7e-bdeb-ec99b168126b","Type":"ContainerStarted","Data":"df9030b739dbc83cef12914ae8d05fcfaf3c9ae9c31af8304d4b753fc912b097"} Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.759216 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" event={"ID":"292e9bf2-9674-423f-9ba5-4e83ff259a06","Type":"ContainerStarted","Data":"7d3c31fe261df466571cb9e5f65f322987eed4df8dc9a806c7cc9344b617a57e"} Feb 18 14:13:42 crc kubenswrapper[4739]: I0218 14:13:42.895383 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97"] Feb 18 14:13:43 crc kubenswrapper[4739]: I0218 14:13:43.770306 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" event={"ID":"ff0bf868-48fc-48a7-845d-3286c1dd16f0","Type":"ContainerStarted","Data":"e72279406b8aa4424db9dd94e06b27a62fb2614ebf2ab1c6e5b7641fdb647dc5"} Feb 18 14:13:43 crc kubenswrapper[4739]: I0218 14:13:43.772028 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58cc898c97-gzzx9" event={"ID":"4cd95c4f-592d-4c7e-bdeb-ec99b168126b","Type":"ContainerStarted","Data":"0944c4f82b66901b45134e70e812dca310249100c057d0ce2374a1d9db397c6f"} Feb 18 14:13:43 crc kubenswrapper[4739]: I0218 14:13:43.799344 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-58cc898c97-gzzx9" podStartSLOduration=2.799298815 podStartE2EDuration="2.799298815s" podCreationTimestamp="2026-02-18 14:13:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:13:43.792936252 
+0000 UTC m=+856.288657184" watchObservedRunningTime="2026-02-18 14:13:43.799298815 +0000 UTC m=+856.295019737" Feb 18 14:13:45 crc kubenswrapper[4739]: I0218 14:13:45.788964 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" event={"ID":"292e9bf2-9674-423f-9ba5-4e83ff259a06","Type":"ContainerStarted","Data":"6e9a3456c2bbfed427d2566c878b3063321f404582a6342268041468bfa5cd9d"} Feb 18 14:13:45 crc kubenswrapper[4739]: I0218 14:13:45.792073 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" event={"ID":"ff0bf868-48fc-48a7-845d-3286c1dd16f0","Type":"ContainerStarted","Data":"b30eef48cdad31e60230ed1e35d86c82376d5afd7e030353eb6a5ee68ac7bff3"} Feb 18 14:13:45 crc kubenswrapper[4739]: I0218 14:13:45.792343 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" Feb 18 14:13:45 crc kubenswrapper[4739]: I0218 14:13:45.794089 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4l8z8" event={"ID":"3bc7475a-7f37-4d47-a7e8-2c58a37c7c0b","Type":"ContainerStarted","Data":"ebede555442eacc2748b40c607b90da96e870241f189750b9363114a60bcdf88"} Feb 18 14:13:45 crc kubenswrapper[4739]: I0218 14:13:45.796077 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-xwm5v" event={"ID":"547a8c99-05a3-45bf-9e45-785d6cdb8fb5","Type":"ContainerStarted","Data":"d5026099f7646b3ba5acdf68b47de85594cce7b67c2d1abc5c66313226ee4178"} Feb 18 14:13:45 crc kubenswrapper[4739]: I0218 14:13:45.796322 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:45 crc kubenswrapper[4739]: I0218 14:13:45.810728 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c8h9g" podStartSLOduration=2.2918268 podStartE2EDuration="4.810710016s" podCreationTimestamp="2026-02-18 14:13:41 +0000 UTC" firstStartedPulling="2026-02-18 14:13:42.580805892 +0000 UTC m=+855.076526824" lastFinishedPulling="2026-02-18 14:13:45.099689078 +0000 UTC m=+857.595410040" observedRunningTime="2026-02-18 14:13:45.80669578 +0000 UTC m=+858.302416712" watchObservedRunningTime="2026-02-18 14:13:45.810710016 +0000 UTC m=+858.306430938" Feb 18 14:13:45 crc kubenswrapper[4739]: I0218 14:13:45.829819 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" podStartSLOduration=2.63595184 podStartE2EDuration="4.829799955s" podCreationTimestamp="2026-02-18 14:13:41 +0000 UTC" firstStartedPulling="2026-02-18 14:13:42.904244745 +0000 UTC m=+855.399965667" lastFinishedPulling="2026-02-18 14:13:45.09809286 +0000 UTC m=+857.593813782" observedRunningTime="2026-02-18 14:13:45.82584845 +0000 UTC m=+858.321569372" watchObservedRunningTime="2026-02-18 14:13:45.829799955 +0000 UTC m=+858.325520887" Feb 18 14:13:45 crc kubenswrapper[4739]: I0218 14:13:45.844467 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-xwm5v" podStartSLOduration=1.65273233 podStartE2EDuration="4.844433137s" podCreationTimestamp="2026-02-18 14:13:41 +0000 UTC" firstStartedPulling="2026-02-18 14:13:41.952162713 +0000 UTC m=+854.447883635" lastFinishedPulling="2026-02-18 14:13:45.14386348 +0000 UTC m=+857.639584442" observedRunningTime="2026-02-18 
14:13:45.843784151 +0000 UTC m=+858.339505083" watchObservedRunningTime="2026-02-18 14:13:45.844433137 +0000 UTC m=+858.340154059" Feb 18 14:13:47 crc kubenswrapper[4739]: I0218 14:13:47.816372 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4l8z8" event={"ID":"3bc7475a-7f37-4d47-a7e8-2c58a37c7c0b","Type":"ContainerStarted","Data":"c63a3d29fbd7009e66d95402576bdeeb8ab45a6edf7e55504cea6dbb0ea79c8f"} Feb 18 14:13:47 crc kubenswrapper[4739]: I0218 14:13:47.837819 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4l8z8" podStartSLOduration=1.896921749 podStartE2EDuration="6.837800553s" podCreationTimestamp="2026-02-18 14:13:41 +0000 UTC" firstStartedPulling="2026-02-18 14:13:42.384060243 +0000 UTC m=+854.879781165" lastFinishedPulling="2026-02-18 14:13:47.324939047 +0000 UTC m=+859.820659969" observedRunningTime="2026-02-18 14:13:47.833780486 +0000 UTC m=+860.329501428" watchObservedRunningTime="2026-02-18 14:13:47.837800553 +0000 UTC m=+860.333521465" Feb 18 14:13:51 crc kubenswrapper[4739]: I0218 14:13:51.927573 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-xwm5v" Feb 18 14:13:52 crc kubenswrapper[4739]: I0218 14:13:52.282018 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:52 crc kubenswrapper[4739]: I0218 14:13:52.282066 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:52 crc kubenswrapper[4739]: I0218 14:13:52.286484 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:52 crc kubenswrapper[4739]: I0218 14:13:52.854405 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:13:52 crc kubenswrapper[4739]: I0218 14:13:52.911569 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-796648847c-cwj5j"] Feb 18 14:13:59 crc kubenswrapper[4739]: I0218 14:13:59.372873 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:13:59 crc kubenswrapper[4739]: I0218 14:13:59.373249 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:14:02 crc kubenswrapper[4739]: I0218 14:14:02.487060 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" Feb 18 14:14:17 crc kubenswrapper[4739]: I0218 14:14:17.958233 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-796648847c-cwj5j" podUID="d4490109-c2b2-4264-b163-1e259f4b335c" containerName="console" containerID="cri-o://ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000" gracePeriod=15 Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.444320 4739 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-796648847c-cwj5j_d4490109-c2b2-4264-b163-1e259f4b335c/console/0.log" Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.444773 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.511627 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v824p\" (UniqueName: \"kubernetes.io/projected/d4490109-c2b2-4264-b163-1e259f4b335c-kube-api-access-v824p\") pod \"d4490109-c2b2-4264-b163-1e259f4b335c\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.511786 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d4490109-c2b2-4264-b163-1e259f4b335c-console-oauth-config\") pod \"d4490109-c2b2-4264-b163-1e259f4b335c\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.511854 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-service-ca\") pod \"d4490109-c2b2-4264-b163-1e259f4b335c\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.511906 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4490109-c2b2-4264-b163-1e259f4b335c-console-serving-cert\") pod \"d4490109-c2b2-4264-b163-1e259f4b335c\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.511950 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-console-config\") pod \"d4490109-c2b2-4264-b163-1e259f4b335c\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.511982 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-trusted-ca-bundle\") pod \"d4490109-c2b2-4264-b163-1e259f4b335c\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.512025 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-oauth-serving-cert\") pod \"d4490109-c2b2-4264-b163-1e259f4b335c\" (UID: \"d4490109-c2b2-4264-b163-1e259f4b335c\") " Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.512815 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-console-config" (OuterVolumeSpecName: "console-config") pod "d4490109-c2b2-4264-b163-1e259f4b335c" (UID: "d4490109-c2b2-4264-b163-1e259f4b335c"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.512807 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d4490109-c2b2-4264-b163-1e259f4b335c" (UID: "d4490109-c2b2-4264-b163-1e259f4b335c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.512866 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d4490109-c2b2-4264-b163-1e259f4b335c" (UID: "d4490109-c2b2-4264-b163-1e259f4b335c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.513517 4739 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.513544 4739 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.513558 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.514985 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-service-ca" (OuterVolumeSpecName: "service-ca") pod "d4490109-c2b2-4264-b163-1e259f4b335c" (UID: "d4490109-c2b2-4264-b163-1e259f4b335c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.521617 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4490109-c2b2-4264-b163-1e259f4b335c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d4490109-c2b2-4264-b163-1e259f4b335c" (UID: "d4490109-c2b2-4264-b163-1e259f4b335c"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.530722 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4490109-c2b2-4264-b163-1e259f4b335c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d4490109-c2b2-4264-b163-1e259f4b335c" (UID: "d4490109-c2b2-4264-b163-1e259f4b335c"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.536127 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4490109-c2b2-4264-b163-1e259f4b335c-kube-api-access-v824p" (OuterVolumeSpecName: "kube-api-access-v824p") pod "d4490109-c2b2-4264-b163-1e259f4b335c" (UID: "d4490109-c2b2-4264-b163-1e259f4b335c"). InnerVolumeSpecName "kube-api-access-v824p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.615902 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v824p\" (UniqueName: \"kubernetes.io/projected/d4490109-c2b2-4264-b163-1e259f4b335c-kube-api-access-v824p\") on node \"crc\" DevicePath \"\"" Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.615966 4739 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d4490109-c2b2-4264-b163-1e259f4b335c-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.615986 4739 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4490109-c2b2-4264-b163-1e259f4b335c-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:14:18 crc kubenswrapper[4739]: I0218 14:14:18.616002 4739 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4490109-c2b2-4264-b163-1e259f4b335c-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.076997 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-796648847c-cwj5j_d4490109-c2b2-4264-b163-1e259f4b335c/console/0.log" Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.077288 4739 generic.go:334] "Generic (PLEG): container finished" podID="d4490109-c2b2-4264-b163-1e259f4b335c" containerID="ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000" exitCode=2 Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.077322 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-796648847c-cwj5j" event={"ID":"d4490109-c2b2-4264-b163-1e259f4b335c","Type":"ContainerDied","Data":"ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000"} Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.077350 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-796648847c-cwj5j" event={"ID":"d4490109-c2b2-4264-b163-1e259f4b335c","Type":"ContainerDied","Data":"ced41aeb18b143d7cb7b37389d8e7093c6f932a8b69ee8fd71755fd592dcd4fa"} Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.077369 4739 scope.go:117] "RemoveContainer" containerID="ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000" Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.077619 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-796648847c-cwj5j" Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.110160 4739 scope.go:117] "RemoveContainer" containerID="ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000" Feb 18 14:14:19 crc kubenswrapper[4739]: E0218 14:14:19.112094 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000\": container with ID starting with ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000 not found: ID does not exist" containerID="ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000" Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.112173 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000"} err="failed to get container status \"ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000\": rpc error: code = NotFound desc = could not find container \"ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000\": container with ID starting with ef5a2a4cabc78a1a2c11ba8f8e1ad3c35b033c6035c4b005035b438814521000 not found: ID does not exist" Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.120564 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-796648847c-cwj5j"] Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.133562 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-796648847c-cwj5j"] Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.856160 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l"] Feb 18 14:14:19 crc kubenswrapper[4739]: E0218 14:14:19.856536 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4490109-c2b2-4264-b163-1e259f4b335c" containerName="console" Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.856551 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4490109-c2b2-4264-b163-1e259f4b335c" containerName="console" Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.856716 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4490109-c2b2-4264-b163-1e259f4b335c" containerName="console" Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.858003 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l" Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.860665 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.869051 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l"] Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.941043 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l\" (UID: \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l" Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.941109 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw4qs\" (UniqueName: \"kubernetes.io/projected/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-kube-api-access-zw4qs\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l\" (UID: \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l" Feb 18 14:14:19 crc kubenswrapper[4739]: I0218 14:14:19.941162 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l\" (UID: \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l" Feb 18 14:14:20 crc kubenswrapper[4739]: I0218 14:14:20.043119 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l\" (UID: \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l" Feb 18 14:14:20 crc kubenswrapper[4739]: I0218 14:14:20.043208 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw4qs\" (UniqueName: \"kubernetes.io/projected/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-kube-api-access-zw4qs\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l\" (UID: \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l" Feb 18 14:14:20 crc kubenswrapper[4739]: I0218 14:14:20.043244 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l\" (UID: \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l" Feb 18 14:14:20 crc kubenswrapper[4739]: I0218 14:14:20.043759 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l\" (UID: \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l" Feb 18 14:14:20 crc kubenswrapper[4739]: I0218 14:14:20.043821 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l\" (UID: \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l" Feb 18 14:14:20 crc kubenswrapper[4739]: I0218 14:14:20.064522 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw4qs\" (UniqueName: \"kubernetes.io/projected/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-kube-api-access-zw4qs\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l\" (UID: \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l" Feb 18 14:14:20 crc kubenswrapper[4739]: I0218 14:14:20.174687 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l" Feb 18 14:14:20 crc kubenswrapper[4739]: I0218 14:14:20.422195 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4490109-c2b2-4264-b163-1e259f4b335c" path="/var/lib/kubelet/pods/d4490109-c2b2-4264-b163-1e259f4b335c/volumes" Feb 18 14:14:20 crc kubenswrapper[4739]: I0218 14:14:20.652652 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l"] Feb 18 14:14:21 crc kubenswrapper[4739]: I0218 14:14:21.098672 4739 generic.go:334] "Generic (PLEG): container finished" podID="0e9e5f51-e676-4cb2-8e3e-b07341a3029a" containerID="1a7b202e80c5eb13ad67a84ac5da7dbf5a09866eb3bb2c54dcc3f3e85e85eaab" exitCode=0 Feb 18 14:14:21 crc kubenswrapper[4739]: I0218 14:14:21.098728 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l" event={"ID":"0e9e5f51-e676-4cb2-8e3e-b07341a3029a","Type":"ContainerDied","Data":"1a7b202e80c5eb13ad67a84ac5da7dbf5a09866eb3bb2c54dcc3f3e85e85eaab"} Feb 18 14:14:21 crc kubenswrapper[4739]: I0218 14:14:21.098759 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l" event={"ID":"0e9e5f51-e676-4cb2-8e3e-b07341a3029a","Type":"ContainerStarted","Data":"16fc8b4df0d353cf1de2e5a1109ebd6f73830749d657b3a4cc0dbd596b7a50ac"} Feb 18 14:14:21 crc kubenswrapper[4739]: I0218 14:14:21.100366 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 14:14:23 crc kubenswrapper[4739]: I0218 14:14:23.113154 4739 generic.go:334] "Generic (PLEG): container finished" podID="0e9e5f51-e676-4cb2-8e3e-b07341a3029a" containerID="5d7c58e409f79b400684d32c3e68db1d709a14c1b605f47d3dcc69243875b01c" exitCode=0 Feb 18 14:14:23 crc kubenswrapper[4739]: I0218 14:14:23.113264 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l" 
event={"ID":"0e9e5f51-e676-4cb2-8e3e-b07341a3029a","Type":"ContainerDied","Data":"5d7c58e409f79b400684d32c3e68db1d709a14c1b605f47d3dcc69243875b01c"} Feb 18 14:14:24 crc kubenswrapper[4739]: I0218 14:14:24.123312 4739 generic.go:334] "Generic (PLEG): container finished" podID="0e9e5f51-e676-4cb2-8e3e-b07341a3029a" containerID="aa216d02d45707b907c1ea5ff97cba6cdb5c1e78b62b23811ea6dc4ed59a01ca" exitCode=0 Feb 18 14:14:24 crc kubenswrapper[4739]: I0218 14:14:24.123374 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l" event={"ID":"0e9e5f51-e676-4cb2-8e3e-b07341a3029a","Type":"ContainerDied","Data":"aa216d02d45707b907c1ea5ff97cba6cdb5c1e78b62b23811ea6dc4ed59a01ca"} Feb 18 14:14:25 crc kubenswrapper[4739]: I0218 14:14:25.411955 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l" Feb 18 14:14:25 crc kubenswrapper[4739]: I0218 14:14:25.434325 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw4qs\" (UniqueName: \"kubernetes.io/projected/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-kube-api-access-zw4qs\") pod \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\" (UID: \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\") " Feb 18 14:14:25 crc kubenswrapper[4739]: I0218 14:14:25.434375 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-bundle\") pod \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\" (UID: \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\") " Feb 18 14:14:25 crc kubenswrapper[4739]: I0218 14:14:25.434403 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-util\") pod \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\" (UID: \"0e9e5f51-e676-4cb2-8e3e-b07341a3029a\") " Feb 18 14:14:25 crc kubenswrapper[4739]: I0218 14:14:25.435667 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-bundle" (OuterVolumeSpecName: "bundle") pod "0e9e5f51-e676-4cb2-8e3e-b07341a3029a" (UID: "0e9e5f51-e676-4cb2-8e3e-b07341a3029a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:14:25 crc kubenswrapper[4739]: I0218 14:14:25.441297 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-kube-api-access-zw4qs" (OuterVolumeSpecName: "kube-api-access-zw4qs") pod "0e9e5f51-e676-4cb2-8e3e-b07341a3029a" (UID: "0e9e5f51-e676-4cb2-8e3e-b07341a3029a"). InnerVolumeSpecName "kube-api-access-zw4qs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:14:25 crc kubenswrapper[4739]: I0218 14:14:25.449523 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-util" (OuterVolumeSpecName: "util") pod "0e9e5f51-e676-4cb2-8e3e-b07341a3029a" (UID: "0e9e5f51-e676-4cb2-8e3e-b07341a3029a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:14:25 crc kubenswrapper[4739]: I0218 14:14:25.536257 4739 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-util\") on node \"crc\" DevicePath \"\"" Feb 18 14:14:25 crc kubenswrapper[4739]: I0218 14:14:25.536292 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw4qs\" (UniqueName: \"kubernetes.io/projected/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-kube-api-access-zw4qs\") on node \"crc\" DevicePath \"\"" Feb 18 14:14:25 crc kubenswrapper[4739]: I0218 14:14:25.536304 4739 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e9e5f51-e676-4cb2-8e3e-b07341a3029a-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:14:26 crc kubenswrapper[4739]: I0218 14:14:26.138508 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l" event={"ID":"0e9e5f51-e676-4cb2-8e3e-b07341a3029a","Type":"ContainerDied","Data":"16fc8b4df0d353cf1de2e5a1109ebd6f73830749d657b3a4cc0dbd596b7a50ac"} Feb 18 14:14:26 crc kubenswrapper[4739]: I0218 14:14:26.138552 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16fc8b4df0d353cf1de2e5a1109ebd6f73830749d657b3a4cc0dbd596b7a50ac" Feb 18 14:14:26 crc kubenswrapper[4739]: I0218 14:14:26.138565 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l" Feb 18 14:14:29 crc kubenswrapper[4739]: I0218 14:14:29.373036 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:14:29 crc kubenswrapper[4739]: I0218 14:14:29.373121 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.156772 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2"] Feb 18 14:14:34 crc kubenswrapper[4739]: E0218 14:14:34.157875 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9e5f51-e676-4cb2-8e3e-b07341a3029a" containerName="extract" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.157893 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9e5f51-e676-4cb2-8e3e-b07341a3029a" containerName="extract" Feb 18 14:14:34 crc kubenswrapper[4739]: E0218 14:14:34.157925 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9e5f51-e676-4cb2-8e3e-b07341a3029a" containerName="util" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.157935 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9e5f51-e676-4cb2-8e3e-b07341a3029a" containerName="util" Feb 18 14:14:34 crc kubenswrapper[4739]: E0218 14:14:34.157946 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9e5f51-e676-4cb2-8e3e-b07341a3029a" containerName="pull" Feb 18 14:14:34 
crc kubenswrapper[4739]: I0218 14:14:34.157955 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9e5f51-e676-4cb2-8e3e-b07341a3029a" containerName="pull" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.158149 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e9e5f51-e676-4cb2-8e3e-b07341a3029a" containerName="extract" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.158917 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.161842 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.162485 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.162733 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-t5zkn" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.162905 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.163755 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.176288 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2"] Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.289459 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d5023d08-507d-422f-b218-72057e18ef93-webhook-cert\") pod \"metallb-operator-controller-manager-5b78699c88-r8kr2\" (UID: \"d5023d08-507d-422f-b218-72057e18ef93\") " pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.289531 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt5t5\" (UniqueName: \"kubernetes.io/projected/d5023d08-507d-422f-b218-72057e18ef93-kube-api-access-jt5t5\") pod \"metallb-operator-controller-manager-5b78699c88-r8kr2\" (UID: \"d5023d08-507d-422f-b218-72057e18ef93\") " pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.289610 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d5023d08-507d-422f-b218-72057e18ef93-apiservice-cert\") pod \"metallb-operator-controller-manager-5b78699c88-r8kr2\" (UID: \"d5023d08-507d-422f-b218-72057e18ef93\") " pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.391495 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d5023d08-507d-422f-b218-72057e18ef93-webhook-cert\") pod \"metallb-operator-controller-manager-5b78699c88-r8kr2\" (UID: \"d5023d08-507d-422f-b218-72057e18ef93\") " pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 
14:14:34.391565 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt5t5\" (UniqueName: \"kubernetes.io/projected/d5023d08-507d-422f-b218-72057e18ef93-kube-api-access-jt5t5\") pod \"metallb-operator-controller-manager-5b78699c88-r8kr2\" (UID: \"d5023d08-507d-422f-b218-72057e18ef93\") " pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.391612 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d5023d08-507d-422f-b218-72057e18ef93-apiservice-cert\") pod \"metallb-operator-controller-manager-5b78699c88-r8kr2\" (UID: \"d5023d08-507d-422f-b218-72057e18ef93\") " pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.398182 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d5023d08-507d-422f-b218-72057e18ef93-webhook-cert\") pod \"metallb-operator-controller-manager-5b78699c88-r8kr2\" (UID: \"d5023d08-507d-422f-b218-72057e18ef93\") " pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.398653 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d5023d08-507d-422f-b218-72057e18ef93-apiservice-cert\") pod \"metallb-operator-controller-manager-5b78699c88-r8kr2\" (UID: \"d5023d08-507d-422f-b218-72057e18ef93\") " pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.416388 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt5t5\" (UniqueName: \"kubernetes.io/projected/d5023d08-507d-422f-b218-72057e18ef93-kube-api-access-jt5t5\") pod \"metallb-operator-controller-manager-5b78699c88-r8kr2\" (UID: \"d5023d08-507d-422f-b218-72057e18ef93\") " pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.479584 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.612317 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g"] Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.613245 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.617752 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.617829 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.617777 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-n6rkn" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.693086 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g"] Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.697516 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0183ebc4-768c-4e08-8f1c-059fff8ba4e3-apiservice-cert\") pod \"metallb-operator-webhook-server-86f6cb9d5d-8jd6g\" (UID: \"0183ebc4-768c-4e08-8f1c-059fff8ba4e3\") " pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.697581 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0183ebc4-768c-4e08-8f1c-059fff8ba4e3-webhook-cert\") pod \"metallb-operator-webhook-server-86f6cb9d5d-8jd6g\" (UID: \"0183ebc4-768c-4e08-8f1c-059fff8ba4e3\") " pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.697608 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67xtp\" (UniqueName: \"kubernetes.io/projected/0183ebc4-768c-4e08-8f1c-059fff8ba4e3-kube-api-access-67xtp\") pod \"metallb-operator-webhook-server-86f6cb9d5d-8jd6g\" (UID: \"0183ebc4-768c-4e08-8f1c-059fff8ba4e3\") " pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.798726 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67xtp\" (UniqueName: \"kubernetes.io/projected/0183ebc4-768c-4e08-8f1c-059fff8ba4e3-kube-api-access-67xtp\") pod \"metallb-operator-webhook-server-86f6cb9d5d-8jd6g\" (UID: \"0183ebc4-768c-4e08-8f1c-059fff8ba4e3\") " pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.798948 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0183ebc4-768c-4e08-8f1c-059fff8ba4e3-apiservice-cert\") pod \"metallb-operator-webhook-server-86f6cb9d5d-8jd6g\" (UID: \"0183ebc4-768c-4e08-8f1c-059fff8ba4e3\") " pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.798981 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0183ebc4-768c-4e08-8f1c-059fff8ba4e3-webhook-cert\") pod \"metallb-operator-webhook-server-86f6cb9d5d-8jd6g\" (UID: \"0183ebc4-768c-4e08-8f1c-059fff8ba4e3\") " pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 
14:14:34.810217 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0183ebc4-768c-4e08-8f1c-059fff8ba4e3-webhook-cert\") pod \"metallb-operator-webhook-server-86f6cb9d5d-8jd6g\" (UID: \"0183ebc4-768c-4e08-8f1c-059fff8ba4e3\") " pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.835628 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67xtp\" (UniqueName: \"kubernetes.io/projected/0183ebc4-768c-4e08-8f1c-059fff8ba4e3-kube-api-access-67xtp\") pod \"metallb-operator-webhook-server-86f6cb9d5d-8jd6g\" (UID: \"0183ebc4-768c-4e08-8f1c-059fff8ba4e3\") " pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.835993 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0183ebc4-768c-4e08-8f1c-059fff8ba4e3-apiservice-cert\") pod \"metallb-operator-webhook-server-86f6cb9d5d-8jd6g\" (UID: \"0183ebc4-768c-4e08-8f1c-059fff8ba4e3\") " pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:34 crc kubenswrapper[4739]: I0218 14:14:34.934132 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:35 crc kubenswrapper[4739]: I0218 14:14:35.166843 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2"] Feb 18 14:14:35 crc kubenswrapper[4739]: I0218 14:14:35.210089 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" event={"ID":"d5023d08-507d-422f-b218-72057e18ef93","Type":"ContainerStarted","Data":"5b6710b41c8c3c3644f4b8c7ac01fa4faf08df9fe0f14b63b0e3bdea2b28ef57"} Feb 18 14:14:35 crc kubenswrapper[4739]: I0218 14:14:35.408879 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g"] Feb 18 14:14:36 crc kubenswrapper[4739]: I0218 14:14:36.217714 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" event={"ID":"0183ebc4-768c-4e08-8f1c-059fff8ba4e3","Type":"ContainerStarted","Data":"eeb5cddbd6c550ba6e509048f55545f5fc2085fd1334451a36c2b9dc38277cd1"} Feb 18 14:14:42 crc kubenswrapper[4739]: I0218 14:14:42.289869 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" event={"ID":"0183ebc4-768c-4e08-8f1c-059fff8ba4e3","Type":"ContainerStarted","Data":"51d685075d5784c3ee8f2b4aece9414104ea75b1f0e897b19ab1e41648c0b843"} Feb 18 14:14:42 crc kubenswrapper[4739]: I0218 14:14:42.290398 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:42 crc kubenswrapper[4739]: I0218 14:14:42.291750 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" event={"ID":"d5023d08-507d-422f-b218-72057e18ef93","Type":"ContainerStarted","Data":"f464ee1c513741325a02b0bed74b4d6dad23cf297d2147cca8e5c0c204eafec2"} Feb 18 14:14:42 crc kubenswrapper[4739]: I0218 14:14:42.291855 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" Feb 18 14:14:42 crc kubenswrapper[4739]: I0218 14:14:42.310330 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" podStartSLOduration=2.128336893 podStartE2EDuration="8.310308488s" podCreationTimestamp="2026-02-18 14:14:34 +0000 UTC" firstStartedPulling="2026-02-18 14:14:35.427820038 +0000 UTC m=+907.923540960" lastFinishedPulling="2026-02-18 14:14:41.609791633 +0000 UTC m=+914.105512555" observedRunningTime="2026-02-18 14:14:42.30719526 +0000 UTC m=+914.802916202" watchObservedRunningTime="2026-02-18 14:14:42.310308488 +0000 UTC m=+914.806029410" Feb 18 14:14:42 crc kubenswrapper[4739]: I0218 14:14:42.328742 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" podStartSLOduration=1.911598516 podStartE2EDuration="8.328725244s" podCreationTimestamp="2026-02-18 14:14:34 +0000 UTC" firstStartedPulling="2026-02-18 14:14:35.173823939 +0000 UTC m=+907.669544861" lastFinishedPulling="2026-02-18 14:14:41.590950667 +0000 UTC m=+914.086671589" observedRunningTime="2026-02-18 14:14:42.325316809 +0000 UTC m=+914.821037731" watchObservedRunningTime="2026-02-18 14:14:42.328725244 +0000 UTC m=+914.824446166" Feb 18 14:14:54 crc kubenswrapper[4739]: I0218 14:14:54.938908 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 14:14:59 crc kubenswrapper[4739]: I0218 14:14:59.372976 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:14:59 crc kubenswrapper[4739]: I0218 14:14:59.373583 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:14:59 crc kubenswrapper[4739]: I0218 14:14:59.373637 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 14:14:59 crc kubenswrapper[4739]: I0218 14:14:59.374319 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"808b39463ceef987da7bce6ba35b68857fd03ff372e8d867a6a7724e8f73df41"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:14:59 crc kubenswrapper[4739]: I0218 14:14:59.374373 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://808b39463ceef987da7bce6ba35b68857fd03ff372e8d867a6a7724e8f73df41" gracePeriod=600 Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.191063 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l"] Feb 
18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.192721 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.195402 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.195436 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.207748 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l"] Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.342164 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c2918ab-f9b2-46b1-9895-7de44312e98e-secret-volume\") pod \"collect-profiles-29523735-tpw9l\" (UID: \"8c2918ab-f9b2-46b1-9895-7de44312e98e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.342287 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c2918ab-f9b2-46b1-9895-7de44312e98e-config-volume\") pod \"collect-profiles-29523735-tpw9l\" (UID: \"8c2918ab-f9b2-46b1-9895-7de44312e98e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.342374 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx6sh\" (UniqueName: \"kubernetes.io/projected/8c2918ab-f9b2-46b1-9895-7de44312e98e-kube-api-access-bx6sh\") pod \"collect-profiles-29523735-tpw9l\" (UID: \"8c2918ab-f9b2-46b1-9895-7de44312e98e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.412763 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="808b39463ceef987da7bce6ba35b68857fd03ff372e8d867a6a7724e8f73df41" exitCode=0 Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.418888 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"808b39463ceef987da7bce6ba35b68857fd03ff372e8d867a6a7724e8f73df41"} Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.418946 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"a6efc2e2824f0e8bfb870590257af439370630fe923098abd18f500360b6dbf0"} Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.418967 4739 scope.go:117] "RemoveContainer" containerID="7bcd6eb763d9647cbf8a9e5cc6f00d646bc23617c6a59561a2e57ce5ab39d939" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.443879 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c2918ab-f9b2-46b1-9895-7de44312e98e-secret-volume\") pod \"collect-profiles-29523735-tpw9l\" (UID: 
\"8c2918ab-f9b2-46b1-9895-7de44312e98e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.443969 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c2918ab-f9b2-46b1-9895-7de44312e98e-config-volume\") pod \"collect-profiles-29523735-tpw9l\" (UID: \"8c2918ab-f9b2-46b1-9895-7de44312e98e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.444072 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx6sh\" (UniqueName: \"kubernetes.io/projected/8c2918ab-f9b2-46b1-9895-7de44312e98e-kube-api-access-bx6sh\") pod \"collect-profiles-29523735-tpw9l\" (UID: \"8c2918ab-f9b2-46b1-9895-7de44312e98e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.445019 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c2918ab-f9b2-46b1-9895-7de44312e98e-config-volume\") pod \"collect-profiles-29523735-tpw9l\" (UID: \"8c2918ab-f9b2-46b1-9895-7de44312e98e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.457834 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c2918ab-f9b2-46b1-9895-7de44312e98e-secret-volume\") pod \"collect-profiles-29523735-tpw9l\" (UID: \"8c2918ab-f9b2-46b1-9895-7de44312e98e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.474873 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx6sh\" (UniqueName: \"kubernetes.io/projected/8c2918ab-f9b2-46b1-9895-7de44312e98e-kube-api-access-bx6sh\") pod \"collect-profiles-29523735-tpw9l\" (UID: \"8c2918ab-f9b2-46b1-9895-7de44312e98e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.519432 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:00 crc kubenswrapper[4739]: I0218 14:15:00.945963 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l"] Feb 18 14:15:00 crc kubenswrapper[4739]: W0218 14:15:00.950330 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c2918ab_f9b2_46b1_9895_7de44312e98e.slice/crio-e4f60d232676e14e68a9fdce590dbde932e5f833aa177d2558207c93fda7b101 WatchSource:0}: Error finding container e4f60d232676e14e68a9fdce590dbde932e5f833aa177d2558207c93fda7b101: Status 404 returned error can't find the container with id e4f60d232676e14e68a9fdce590dbde932e5f833aa177d2558207c93fda7b101 Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.425076 4739 generic.go:334] "Generic (PLEG): container finished" podID="8c2918ab-f9b2-46b1-9895-7de44312e98e" containerID="a63b0fe82e01dc057994e21049631942cf32124ffb8f8b9b2acf4cf4375ae993" exitCode=0 Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.425256 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" event={"ID":"8c2918ab-f9b2-46b1-9895-7de44312e98e","Type":"ContainerDied","Data":"a63b0fe82e01dc057994e21049631942cf32124ffb8f8b9b2acf4cf4375ae993"} Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.425434 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" event={"ID":"8c2918ab-f9b2-46b1-9895-7de44312e98e","Type":"ContainerStarted","Data":"e4f60d232676e14e68a9fdce590dbde932e5f833aa177d2558207c93fda7b101"} Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.538938 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m8ss8"] Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.540624 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.553303 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m8ss8"] Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.660875 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-988kn\" (UniqueName: \"kubernetes.io/projected/91302fcf-f057-4e35-9287-c67dfb9b396b-kube-api-access-988kn\") pod \"community-operators-m8ss8\" (UID: \"91302fcf-f057-4e35-9287-c67dfb9b396b\") " pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.661175 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91302fcf-f057-4e35-9287-c67dfb9b396b-catalog-content\") pod \"community-operators-m8ss8\" (UID: \"91302fcf-f057-4e35-9287-c67dfb9b396b\") " pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.661257 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91302fcf-f057-4e35-9287-c67dfb9b396b-utilities\") pod \"community-operators-m8ss8\" (UID: \"91302fcf-f057-4e35-9287-c67dfb9b396b\") " pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.762512 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91302fcf-f057-4e35-9287-c67dfb9b396b-catalog-content\") pod \"community-operators-m8ss8\" (UID: \"91302fcf-f057-4e35-9287-c67dfb9b396b\") " pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.762909 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91302fcf-f057-4e35-9287-c67dfb9b396b-utilities\") pod \"community-operators-m8ss8\" (UID: \"91302fcf-f057-4e35-9287-c67dfb9b396b\") " pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.762995 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-988kn\" (UniqueName: \"kubernetes.io/projected/91302fcf-f057-4e35-9287-c67dfb9b396b-kube-api-access-988kn\") pod \"community-operators-m8ss8\" (UID: \"91302fcf-f057-4e35-9287-c67dfb9b396b\") " pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.763424 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91302fcf-f057-4e35-9287-c67dfb9b396b-catalog-content\") pod \"community-operators-m8ss8\" (UID: \"91302fcf-f057-4e35-9287-c67dfb9b396b\") " pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.763571 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91302fcf-f057-4e35-9287-c67dfb9b396b-utilities\") pod \"community-operators-m8ss8\" (UID: \"91302fcf-f057-4e35-9287-c67dfb9b396b\") " pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.783465 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-988kn\" (UniqueName: \"kubernetes.io/projected/91302fcf-f057-4e35-9287-c67dfb9b396b-kube-api-access-988kn\") pod \"community-operators-m8ss8\" (UID: \"91302fcf-f057-4e35-9287-c67dfb9b396b\") " pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:01 crc kubenswrapper[4739]: I0218 14:15:01.856941 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:02 crc kubenswrapper[4739]: I0218 14:15:02.408365 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m8ss8"] Feb 18 14:15:02 crc kubenswrapper[4739]: W0218 14:15:02.412965 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91302fcf_f057_4e35_9287_c67dfb9b396b.slice/crio-324e77c23f7c5fa6083dab3a0d4ac0b672a850505019a44ab6b6ebf08324aa98 WatchSource:0}: Error finding container 324e77c23f7c5fa6083dab3a0d4ac0b672a850505019a44ab6b6ebf08324aa98: Status 404 returned error can't find the container with id 324e77c23f7c5fa6083dab3a0d4ac0b672a850505019a44ab6b6ebf08324aa98 Feb 18 14:15:02 crc kubenswrapper[4739]: I0218 14:15:02.433832 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8ss8" event={"ID":"91302fcf-f057-4e35-9287-c67dfb9b396b","Type":"ContainerStarted","Data":"324e77c23f7c5fa6083dab3a0d4ac0b672a850505019a44ab6b6ebf08324aa98"} Feb 18 14:15:02 crc kubenswrapper[4739]: I0218 14:15:02.829835 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:02 crc kubenswrapper[4739]: I0218 14:15:02.989417 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c2918ab-f9b2-46b1-9895-7de44312e98e-secret-volume\") pod \"8c2918ab-f9b2-46b1-9895-7de44312e98e\" (UID: \"8c2918ab-f9b2-46b1-9895-7de44312e98e\") " Feb 18 14:15:02 crc kubenswrapper[4739]: I0218 14:15:02.989562 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx6sh\" (UniqueName: \"kubernetes.io/projected/8c2918ab-f9b2-46b1-9895-7de44312e98e-kube-api-access-bx6sh\") pod \"8c2918ab-f9b2-46b1-9895-7de44312e98e\" (UID: \"8c2918ab-f9b2-46b1-9895-7de44312e98e\") " Feb 18 14:15:02 crc kubenswrapper[4739]: I0218 14:15:02.989667 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c2918ab-f9b2-46b1-9895-7de44312e98e-config-volume\") pod \"8c2918ab-f9b2-46b1-9895-7de44312e98e\" (UID: \"8c2918ab-f9b2-46b1-9895-7de44312e98e\") " Feb 18 14:15:02 crc kubenswrapper[4739]: I0218 14:15:02.990896 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2918ab-f9b2-46b1-9895-7de44312e98e-config-volume" (OuterVolumeSpecName: "config-volume") pod "8c2918ab-f9b2-46b1-9895-7de44312e98e" (UID: "8c2918ab-f9b2-46b1-9895-7de44312e98e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:15:02 crc kubenswrapper[4739]: I0218 14:15:02.996296 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c2918ab-f9b2-46b1-9895-7de44312e98e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8c2918ab-f9b2-46b1-9895-7de44312e98e" (UID: "8c2918ab-f9b2-46b1-9895-7de44312e98e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:15:02 crc kubenswrapper[4739]: I0218 14:15:02.996700 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c2918ab-f9b2-46b1-9895-7de44312e98e-kube-api-access-bx6sh" (OuterVolumeSpecName: "kube-api-access-bx6sh") pod "8c2918ab-f9b2-46b1-9895-7de44312e98e" (UID: "8c2918ab-f9b2-46b1-9895-7de44312e98e"). InnerVolumeSpecName "kube-api-access-bx6sh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:15:03 crc kubenswrapper[4739]: I0218 14:15:03.091996 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx6sh\" (UniqueName: \"kubernetes.io/projected/8c2918ab-f9b2-46b1-9895-7de44312e98e-kube-api-access-bx6sh\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:03 crc kubenswrapper[4739]: I0218 14:15:03.092075 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c2918ab-f9b2-46b1-9895-7de44312e98e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:03 crc kubenswrapper[4739]: I0218 14:15:03.092087 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c2918ab-f9b2-46b1-9895-7de44312e98e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:03 crc kubenswrapper[4739]: I0218 14:15:03.444645 4739 generic.go:334] "Generic (PLEG): container finished" podID="91302fcf-f057-4e35-9287-c67dfb9b396b" containerID="71ade2fe74ee7f12971412c96fb1c41dff453214ea31392830a3982382cdb404" exitCode=0 Feb 18 14:15:03 crc kubenswrapper[4739]: I0218 14:15:03.444766 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8ss8" event={"ID":"91302fcf-f057-4e35-9287-c67dfb9b396b","Type":"ContainerDied","Data":"71ade2fe74ee7f12971412c96fb1c41dff453214ea31392830a3982382cdb404"} Feb 18 14:15:03 crc kubenswrapper[4739]: I0218 14:15:03.448043 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" event={"ID":"8c2918ab-f9b2-46b1-9895-7de44312e98e","Type":"ContainerDied","Data":"e4f60d232676e14e68a9fdce590dbde932e5f833aa177d2558207c93fda7b101"} Feb 18 14:15:03 crc kubenswrapper[4739]: I0218 14:15:03.448086 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4f60d232676e14e68a9fdce590dbde932e5f833aa177d2558207c93fda7b101" Feb 18 14:15:03 crc kubenswrapper[4739]: I0218 14:15:03.448122 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.324206 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f88z9"] Feb 18 14:15:04 crc kubenswrapper[4739]: E0218 14:15:04.325100 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2918ab-f9b2-46b1-9895-7de44312e98e" containerName="collect-profiles" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.325164 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2918ab-f9b2-46b1-9895-7de44312e98e" containerName="collect-profiles" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.325346 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c2918ab-f9b2-46b1-9895-7de44312e98e" containerName="collect-profiles" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.326406 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.337579 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f88z9"] Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.457498 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8ss8" event={"ID":"91302fcf-f057-4e35-9287-c67dfb9b396b","Type":"ContainerStarted","Data":"e4dc897a4ecdb78cdabbf2e1e8ef1646b488972fc4ea441479e3e052fca42176"} Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.518377 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwcnn\" (UniqueName: \"kubernetes.io/projected/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-kube-api-access-jwcnn\") pod \"certified-operators-f88z9\" (UID: \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\") " pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.518662 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-catalog-content\") pod \"certified-operators-f88z9\" (UID: \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\") " pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.518709 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-utilities\") pod \"certified-operators-f88z9\" (UID: \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\") " pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.621001 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwcnn\" (UniqueName: \"kubernetes.io/projected/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-kube-api-access-jwcnn\") pod \"certified-operators-f88z9\" (UID: \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\") " pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.621118 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-catalog-content\") pod \"certified-operators-f88z9\" (UID: 
\"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\") " pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.621156 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-utilities\") pod \"certified-operators-f88z9\" (UID: \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\") " pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.621831 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-catalog-content\") pod \"certified-operators-f88z9\" (UID: \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\") " pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.621833 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-utilities\") pod \"certified-operators-f88z9\" (UID: \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\") " pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.644820 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwcnn\" (UniqueName: \"kubernetes.io/projected/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-kube-api-access-jwcnn\") pod \"certified-operators-f88z9\" (UID: \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\") " pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:04 crc kubenswrapper[4739]: I0218 14:15:04.944255 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:05 crc kubenswrapper[4739]: I0218 14:15:05.408024 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f88z9"] Feb 18 14:15:05 crc kubenswrapper[4739]: I0218 14:15:05.468107 4739 generic.go:334] "Generic (PLEG): container finished" podID="91302fcf-f057-4e35-9287-c67dfb9b396b" containerID="e4dc897a4ecdb78cdabbf2e1e8ef1646b488972fc4ea441479e3e052fca42176" exitCode=0 Feb 18 14:15:05 crc kubenswrapper[4739]: I0218 14:15:05.468509 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8ss8" event={"ID":"91302fcf-f057-4e35-9287-c67dfb9b396b","Type":"ContainerDied","Data":"e4dc897a4ecdb78cdabbf2e1e8ef1646b488972fc4ea441479e3e052fca42176"} Feb 18 14:15:05 crc kubenswrapper[4739]: I0218 14:15:05.470853 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f88z9" event={"ID":"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8","Type":"ContainerStarted","Data":"41b5b1fa97b1f509032d4fb0932b3650e238b4807fc2c4bec6abbcc9cb202890"} Feb 18 14:15:06 crc kubenswrapper[4739]: I0218 14:15:06.484061 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8ss8" event={"ID":"91302fcf-f057-4e35-9287-c67dfb9b396b","Type":"ContainerStarted","Data":"446e617bb4a35e73a566529673a4e33b0b816e8297774dc987dd15b6a9fb9a89"} Feb 18 14:15:06 crc kubenswrapper[4739]: I0218 14:15:06.486144 4739 generic.go:334] "Generic (PLEG): container finished" podID="b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" containerID="53b0c048fa457de86b418f0b4656b992fed992fd83e70f9b96b2297374e4d95f" exitCode=0 Feb 18 14:15:06 crc kubenswrapper[4739]: I0218 
14:15:06.486179 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f88z9" event={"ID":"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8","Type":"ContainerDied","Data":"53b0c048fa457de86b418f0b4656b992fed992fd83e70f9b96b2297374e4d95f"} Feb 18 14:15:06 crc kubenswrapper[4739]: I0218 14:15:06.511216 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m8ss8" podStartSLOduration=3.094751954 podStartE2EDuration="5.511199311s" podCreationTimestamp="2026-02-18 14:15:01 +0000 UTC" firstStartedPulling="2026-02-18 14:15:03.447119799 +0000 UTC m=+935.942840721" lastFinishedPulling="2026-02-18 14:15:05.863567146 +0000 UTC m=+938.359288078" observedRunningTime="2026-02-18 14:15:06.506177627 +0000 UTC m=+939.001898569" watchObservedRunningTime="2026-02-18 14:15:06.511199311 +0000 UTC m=+939.006920233" Feb 18 14:15:07 crc kubenswrapper[4739]: I0218 14:15:07.494154 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f88z9" event={"ID":"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8","Type":"ContainerStarted","Data":"6e6190ad875fe157da0a09a2515c4a706de6d4f39b8ace7b14ac7a871a557108"} Feb 18 14:15:08 crc kubenswrapper[4739]: I0218 14:15:08.503737 4739 generic.go:334] "Generic (PLEG): container finished" podID="b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" containerID="6e6190ad875fe157da0a09a2515c4a706de6d4f39b8ace7b14ac7a871a557108" exitCode=0 Feb 18 14:15:08 crc kubenswrapper[4739]: I0218 14:15:08.503902 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f88z9" event={"ID":"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8","Type":"ContainerDied","Data":"6e6190ad875fe157da0a09a2515c4a706de6d4f39b8ace7b14ac7a871a557108"} Feb 18 14:15:09 crc kubenswrapper[4739]: I0218 14:15:09.521961 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f88z9" event={"ID":"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8","Type":"ContainerStarted","Data":"cc557a8dbc62cb50f336eee295b41266868c40c03bb8377f8c4e3980b08dbe3f"} Feb 18 14:15:09 crc kubenswrapper[4739]: I0218 14:15:09.540330 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f88z9" podStartSLOduration=3.108443087 podStartE2EDuration="5.540314457s" podCreationTimestamp="2026-02-18 14:15:04 +0000 UTC" firstStartedPulling="2026-02-18 14:15:06.487645738 +0000 UTC m=+938.983366660" lastFinishedPulling="2026-02-18 14:15:08.919517108 +0000 UTC m=+941.415238030" observedRunningTime="2026-02-18 14:15:09.539655491 +0000 UTC m=+942.035376433" watchObservedRunningTime="2026-02-18 14:15:09.540314457 +0000 UTC m=+942.036035369" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.721191 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w6ms6"] Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.723935 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.743119 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6ms6"] Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.827093 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1d69322-06a6-4526-bb0c-be78ad5cd30d-catalog-content\") pod \"redhat-marketplace-w6ms6\" (UID: \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\") " pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.827304 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1d69322-06a6-4526-bb0c-be78ad5cd30d-utilities\") pod \"redhat-marketplace-w6ms6\" (UID: \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\") " pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.827386 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4w8z\" (UniqueName: \"kubernetes.io/projected/c1d69322-06a6-4526-bb0c-be78ad5cd30d-kube-api-access-t4w8z\") pod \"redhat-marketplace-w6ms6\" (UID: \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\") " pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.857682 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.857745 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.906038 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.928658 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1d69322-06a6-4526-bb0c-be78ad5cd30d-utilities\") pod \"redhat-marketplace-w6ms6\" (UID: \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\") " pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.928701 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4w8z\" (UniqueName: \"kubernetes.io/projected/c1d69322-06a6-4526-bb0c-be78ad5cd30d-kube-api-access-t4w8z\") pod \"redhat-marketplace-w6ms6\" (UID: \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\") " pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.928793 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1d69322-06a6-4526-bb0c-be78ad5cd30d-catalog-content\") pod \"redhat-marketplace-w6ms6\" (UID: \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\") " pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.929138 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1d69322-06a6-4526-bb0c-be78ad5cd30d-utilities\") pod \"redhat-marketplace-w6ms6\" (UID: 
\"c1d69322-06a6-4526-bb0c-be78ad5cd30d\") " pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.929173 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1d69322-06a6-4526-bb0c-be78ad5cd30d-catalog-content\") pod \"redhat-marketplace-w6ms6\" (UID: \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\") " pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:11 crc kubenswrapper[4739]: I0218 14:15:11.954431 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4w8z\" (UniqueName: \"kubernetes.io/projected/c1d69322-06a6-4526-bb0c-be78ad5cd30d-kube-api-access-t4w8z\") pod \"redhat-marketplace-w6ms6\" (UID: \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\") " pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:12 crc kubenswrapper[4739]: I0218 14:15:12.070601 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:12 crc kubenswrapper[4739]: I0218 14:15:12.584543 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6ms6"] Feb 18 14:15:12 crc kubenswrapper[4739]: W0218 14:15:12.592867 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1d69322_06a6_4526_bb0c_be78ad5cd30d.slice/crio-e64d1cb00401da95c256148f61aaf82a1d57b40a900e4149df42275d07d8deec WatchSource:0}: Error finding container e64d1cb00401da95c256148f61aaf82a1d57b40a900e4149df42275d07d8deec: Status 404 returned error can't find the container with id e64d1cb00401da95c256148f61aaf82a1d57b40a900e4149df42275d07d8deec Feb 18 14:15:12 crc kubenswrapper[4739]: I0218 14:15:12.595176 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:13 crc kubenswrapper[4739]: I0218 14:15:13.552990 4739 generic.go:334] "Generic (PLEG): container finished" podID="c1d69322-06a6-4526-bb0c-be78ad5cd30d" containerID="9dadcc09cca86fbdc712ca2244ebf4d3a1f07ef7fa23b75c6e76d225f2612010" exitCode=0 Feb 18 14:15:13 crc kubenswrapper[4739]: I0218 14:15:13.553055 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6ms6" event={"ID":"c1d69322-06a6-4526-bb0c-be78ad5cd30d","Type":"ContainerDied","Data":"9dadcc09cca86fbdc712ca2244ebf4d3a1f07ef7fa23b75c6e76d225f2612010"} Feb 18 14:15:13 crc kubenswrapper[4739]: I0218 14:15:13.553388 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6ms6" event={"ID":"c1d69322-06a6-4526-bb0c-be78ad5cd30d","Type":"ContainerStarted","Data":"e64d1cb00401da95c256148f61aaf82a1d57b40a900e4149df42275d07d8deec"} Feb 18 14:15:14 crc kubenswrapper[4739]: I0218 14:15:14.483071 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" Feb 18 14:15:14 crc kubenswrapper[4739]: I0218 14:15:14.563901 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6ms6" event={"ID":"c1d69322-06a6-4526-bb0c-be78ad5cd30d","Type":"ContainerStarted","Data":"4d298e39f8640dad33b091b7d6ac236dd76a1087678a6709231f48d290f955f0"} Feb 18 14:15:14 crc kubenswrapper[4739]: I0218 14:15:14.712118 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-m8ss8"] Feb 18 14:15:14 crc kubenswrapper[4739]: I0218 14:15:14.712409 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m8ss8" podUID="91302fcf-f057-4e35-9287-c67dfb9b396b" containerName="registry-server" containerID="cri-o://446e617bb4a35e73a566529673a4e33b0b816e8297774dc987dd15b6a9fb9a89" gracePeriod=2 Feb 18 14:15:14 crc kubenswrapper[4739]: I0218 14:15:14.945287 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:14 crc kubenswrapper[4739]: I0218 14:15:14.945342 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:14 crc kubenswrapper[4739]: I0218 14:15:14.998785 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.166190 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-w8l6z"] Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.172241 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.174260 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-55s7l" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.174780 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.174895 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.176465 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v"] Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.177653 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.181004 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.194985 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v"] Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.276056 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-8gqkq"] Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.286260 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lgdl\" (UniqueName: \"kubernetes.io/projected/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-kube-api-access-5lgdl\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.286350 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-metrics-certs\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.286643 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjxcz\" (UniqueName: \"kubernetes.io/projected/bf495248-0dde-4619-bce7-2cbbda1fd646-kube-api-access-gjxcz\") pod \"frr-k8s-webhook-server-78b44bf5bb-q8h4v\" (UID: \"bf495248-0dde-4619-bce7-2cbbda1fd646\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.286887 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-frr-sockets\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.288974 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-8gqkq" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.289317 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-frr-conf\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.289389 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf495248-0dde-4619-bce7-2cbbda1fd646-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-q8h4v\" (UID: \"bf495248-0dde-4619-bce7-2cbbda1fd646\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.289541 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-reloader\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.289622 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-frr-startup\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.289650 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-metrics\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.295103 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-d5sjc" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.298849 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.299084 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.299129 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.316284 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-tr2nx"] Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.320100 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.323079 4739 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.344669 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-tr2nx"] Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.391925 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-frr-conf\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.392282 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf495248-0dde-4619-bce7-2cbbda1fd646-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-q8h4v\" (UID: \"bf495248-0dde-4619-bce7-2cbbda1fd646\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.392346 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvmrq\" (UniqueName: \"kubernetes.io/projected/65fdc711-6806-433f-9f62-a09e816c6acf-kube-api-access-zvmrq\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.392373 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-reloader\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.392384 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-frr-conf\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.392395 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-frr-startup\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.392553 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-metrics\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.392647 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-reloader\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: E0218 14:15:15.392655 4739 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 18 14:15:15 crc kubenswrapper[4739]: E0218 
14:15:15.392831 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf495248-0dde-4619-bce7-2cbbda1fd646-cert podName:bf495248-0dde-4619-bce7-2cbbda1fd646 nodeName:}" failed. No retries permitted until 2026-02-18 14:15:15.892764175 +0000 UTC m=+948.388485097 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bf495248-0dde-4619-bce7-2cbbda1fd646-cert") pod "frr-k8s-webhook-server-78b44bf5bb-q8h4v" (UID: "bf495248-0dde-4619-bce7-2cbbda1fd646") : secret "frr-k8s-webhook-server-cert" not found Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.392982 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/65fdc711-6806-433f-9f62-a09e816c6acf-metallb-excludel2\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.393021 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-metrics\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.393572 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-frr-startup\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.393334 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lgdl\" (UniqueName: \"kubernetes.io/projected/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-kube-api-access-5lgdl\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.393705 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-metrics-certs\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.393776 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjxcz\" (UniqueName: \"kubernetes.io/projected/bf495248-0dde-4619-bce7-2cbbda1fd646-kube-api-access-gjxcz\") pod \"frr-k8s-webhook-server-78b44bf5bb-q8h4v\" (UID: \"bf495248-0dde-4619-bce7-2cbbda1fd646\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.393803 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65fdc711-6806-433f-9f62-a09e816c6acf-memberlist\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.393824 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-frr-sockets\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " 
pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.393839 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65fdc711-6806-433f-9f62-a09e816c6acf-metrics-certs\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:15 crc kubenswrapper[4739]: E0218 14:15:15.394045 4739 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 18 14:15:15 crc kubenswrapper[4739]: E0218 14:15:15.394092 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-metrics-certs podName:8ee20c2c-abb7-44a8-a5f9-8cacfce6f781 nodeName:}" failed. No retries permitted until 2026-02-18 14:15:15.894077828 +0000 UTC m=+948.389798750 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-metrics-certs") pod "frr-k8s-w8l6z" (UID: "8ee20c2c-abb7-44a8-a5f9-8cacfce6f781") : secret "frr-k8s-certs-secret" not found Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.394561 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-frr-sockets\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.415085 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lgdl\" (UniqueName: \"kubernetes.io/projected/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-kube-api-access-5lgdl\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.416676 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjxcz\" (UniqueName: \"kubernetes.io/projected/bf495248-0dde-4619-bce7-2cbbda1fd646-kube-api-access-gjxcz\") pod \"frr-k8s-webhook-server-78b44bf5bb-q8h4v\" (UID: \"bf495248-0dde-4619-bce7-2cbbda1fd646\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.496343 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvmrq\" (UniqueName: \"kubernetes.io/projected/65fdc711-6806-433f-9f62-a09e816c6acf-kube-api-access-zvmrq\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.496518 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/65fdc711-6806-433f-9f62-a09e816c6acf-metallb-excludel2\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.496614 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7bcf09d7-a0a6-4225-a222-1c05f51e5f7d-cert\") pod \"controller-69bbfbf88f-tr2nx\" (UID: \"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d\") " pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 
14:15:15.496640 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bcf09d7-a0a6-4225-a222-1c05f51e5f7d-metrics-certs\") pod \"controller-69bbfbf88f-tr2nx\" (UID: \"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d\") " pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.496666 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65fdc711-6806-433f-9f62-a09e816c6acf-memberlist\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.496689 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65fdc711-6806-433f-9f62-a09e816c6acf-metrics-certs\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.496774 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6drjl\" (UniqueName: \"kubernetes.io/projected/7bcf09d7-a0a6-4225-a222-1c05f51e5f7d-kube-api-access-6drjl\") pod \"controller-69bbfbf88f-tr2nx\" (UID: \"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d\") " pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:15 crc kubenswrapper[4739]: E0218 14:15:15.496801 4739 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 18 14:15:15 crc kubenswrapper[4739]: E0218 14:15:15.496856 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65fdc711-6806-433f-9f62-a09e816c6acf-memberlist podName:65fdc711-6806-433f-9f62-a09e816c6acf nodeName:}" failed. No retries permitted until 2026-02-18 14:15:15.996841222 +0000 UTC m=+948.492562144 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/65fdc711-6806-433f-9f62-a09e816c6acf-memberlist") pod "speaker-8gqkq" (UID: "65fdc711-6806-433f-9f62-a09e816c6acf") : secret "metallb-memberlist" not found Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.497501 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/65fdc711-6806-433f-9f62-a09e816c6acf-metallb-excludel2\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.502233 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65fdc711-6806-433f-9f62-a09e816c6acf-metrics-certs\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.527158 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvmrq\" (UniqueName: \"kubernetes.io/projected/65fdc711-6806-433f-9f62-a09e816c6acf-kube-api-access-zvmrq\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.573363 4739 generic.go:334] "Generic (PLEG): container finished" podID="c1d69322-06a6-4526-bb0c-be78ad5cd30d" containerID="4d298e39f8640dad33b091b7d6ac236dd76a1087678a6709231f48d290f955f0" exitCode=0 Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.573466 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6ms6" event={"ID":"c1d69322-06a6-4526-bb0c-be78ad5cd30d","Type":"ContainerDied","Data":"4d298e39f8640dad33b091b7d6ac236dd76a1087678a6709231f48d290f955f0"} Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.576040 4739 generic.go:334] "Generic (PLEG): container finished" podID="91302fcf-f057-4e35-9287-c67dfb9b396b" containerID="446e617bb4a35e73a566529673a4e33b0b816e8297774dc987dd15b6a9fb9a89" exitCode=0 Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.576118 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8ss8" event={"ID":"91302fcf-f057-4e35-9287-c67dfb9b396b","Type":"ContainerDied","Data":"446e617bb4a35e73a566529673a4e33b0b816e8297774dc987dd15b6a9fb9a89"} Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.598344 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7bcf09d7-a0a6-4225-a222-1c05f51e5f7d-cert\") pod \"controller-69bbfbf88f-tr2nx\" (UID: \"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d\") " pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.598397 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bcf09d7-a0a6-4225-a222-1c05f51e5f7d-metrics-certs\") pod \"controller-69bbfbf88f-tr2nx\" (UID: \"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d\") " pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.598502 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6drjl\" (UniqueName: \"kubernetes.io/projected/7bcf09d7-a0a6-4225-a222-1c05f51e5f7d-kube-api-access-6drjl\") pod \"controller-69bbfbf88f-tr2nx\" (UID: 
\"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d\") " pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.620539 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7bcf09d7-a0a6-4225-a222-1c05f51e5f7d-metrics-certs\") pod \"controller-69bbfbf88f-tr2nx\" (UID: \"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d\") " pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.621643 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7bcf09d7-a0a6-4225-a222-1c05f51e5f7d-cert\") pod \"controller-69bbfbf88f-tr2nx\" (UID: \"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d\") " pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.640975 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6drjl\" (UniqueName: \"kubernetes.io/projected/7bcf09d7-a0a6-4225-a222-1c05f51e5f7d-kube-api-access-6drjl\") pod \"controller-69bbfbf88f-tr2nx\" (UID: \"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d\") " pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.673742 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.677831 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.903660 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-metrics-certs\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.904031 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf495248-0dde-4619-bce7-2cbbda1fd646-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-q8h4v\" (UID: \"bf495248-0dde-4619-bce7-2cbbda1fd646\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.909082 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ee20c2c-abb7-44a8-a5f9-8cacfce6f781-metrics-certs\") pod \"frr-k8s-w8l6z\" (UID: \"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781\") " pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:15 crc kubenswrapper[4739]: I0218 14:15:15.909440 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bf495248-0dde-4619-bce7-2cbbda1fd646-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-q8h4v\" (UID: \"bf495248-0dde-4619-bce7-2cbbda1fd646\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.006283 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65fdc711-6806-433f-9f62-a09e816c6acf-memberlist\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:16 crc kubenswrapper[4739]: E0218 14:15:16.006528 4739 secret.go:188] Couldn't get secret 
metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 18 14:15:16 crc kubenswrapper[4739]: E0218 14:15:16.006629 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65fdc711-6806-433f-9f62-a09e816c6acf-memberlist podName:65fdc711-6806-433f-9f62-a09e816c6acf nodeName:}" failed. No retries permitted until 2026-02-18 14:15:17.006603353 +0000 UTC m=+949.502324275 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/65fdc711-6806-433f-9f62-a09e816c6acf-memberlist") pod "speaker-8gqkq" (UID: "65fdc711-6806-433f-9f62-a09e816c6acf") : secret "metallb-memberlist" not found Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.097014 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.107199 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.164107 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.309762 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-tr2nx"] Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.311017 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-988kn\" (UniqueName: \"kubernetes.io/projected/91302fcf-f057-4e35-9287-c67dfb9b396b-kube-api-access-988kn\") pod \"91302fcf-f057-4e35-9287-c67dfb9b396b\" (UID: \"91302fcf-f057-4e35-9287-c67dfb9b396b\") " Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.311108 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91302fcf-f057-4e35-9287-c67dfb9b396b-utilities\") pod \"91302fcf-f057-4e35-9287-c67dfb9b396b\" (UID: \"91302fcf-f057-4e35-9287-c67dfb9b396b\") " Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.311136 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91302fcf-f057-4e35-9287-c67dfb9b396b-catalog-content\") pod \"91302fcf-f057-4e35-9287-c67dfb9b396b\" (UID: \"91302fcf-f057-4e35-9287-c67dfb9b396b\") " Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.312334 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91302fcf-f057-4e35-9287-c67dfb9b396b-utilities" (OuterVolumeSpecName: "utilities") pod "91302fcf-f057-4e35-9287-c67dfb9b396b" (UID: "91302fcf-f057-4e35-9287-c67dfb9b396b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.315575 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91302fcf-f057-4e35-9287-c67dfb9b396b-kube-api-access-988kn" (OuterVolumeSpecName: "kube-api-access-988kn") pod "91302fcf-f057-4e35-9287-c67dfb9b396b" (UID: "91302fcf-f057-4e35-9287-c67dfb9b396b"). InnerVolumeSpecName "kube-api-access-988kn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.413789 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91302fcf-f057-4e35-9287-c67dfb9b396b-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.413829 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-988kn\" (UniqueName: \"kubernetes.io/projected/91302fcf-f057-4e35-9287-c67dfb9b396b-kube-api-access-988kn\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.454302 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91302fcf-f057-4e35-9287-c67dfb9b396b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91302fcf-f057-4e35-9287-c67dfb9b396b" (UID: "91302fcf-f057-4e35-9287-c67dfb9b396b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.515492 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91302fcf-f057-4e35-9287-c67dfb9b396b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.595765 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8l6z" event={"ID":"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781","Type":"ContainerStarted","Data":"72302f965aab99323370179ac49243577654ec94472789c9404e6d9268db802d"} Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.600492 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v"] Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.607056 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8ss8" event={"ID":"91302fcf-f057-4e35-9287-c67dfb9b396b","Type":"ContainerDied","Data":"324e77c23f7c5fa6083dab3a0d4ac0b672a850505019a44ab6b6ebf08324aa98"} Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.607101 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m8ss8" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.607117 4739 scope.go:117] "RemoveContainer" containerID="446e617bb4a35e73a566529673a4e33b0b816e8297774dc987dd15b6a9fb9a89" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.611898 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-tr2nx" event={"ID":"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d","Type":"ContainerStarted","Data":"de2ce2c2e7e8920c945292e32d288535f4d829f8fe7efd2af53224c6a19bfdd9"} Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.611926 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-tr2nx" event={"ID":"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d","Type":"ContainerStarted","Data":"4d75ca7837bbd3dcbc04c2d2a485376c2bbd7ba61474af3350c20db033a86d3a"} Feb 18 14:15:16 crc kubenswrapper[4739]: W0218 14:15:16.620298 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf495248_0dde_4619_bce7_2cbbda1fd646.slice/crio-10afffbfc38b905301885591e8c82407aac3135e27d325f7653f1742e43b4a12 WatchSource:0}: Error finding container 10afffbfc38b905301885591e8c82407aac3135e27d325f7653f1742e43b4a12: Status 404 returned error can't find the container with id 10afffbfc38b905301885591e8c82407aac3135e27d325f7653f1742e43b4a12 Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.668732 4739 scope.go:117] "RemoveContainer" containerID="e4dc897a4ecdb78cdabbf2e1e8ef1646b488972fc4ea441479e3e052fca42176" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.694137 4739 scope.go:117] "RemoveContainer" containerID="71ade2fe74ee7f12971412c96fb1c41dff453214ea31392830a3982382cdb404" Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.702226 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m8ss8"] Feb 18 14:15:16 crc kubenswrapper[4739]: I0218 14:15:16.708605 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m8ss8"] Feb 18 14:15:17 crc kubenswrapper[4739]: I0218 14:15:17.024113 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65fdc711-6806-433f-9f62-a09e816c6acf-memberlist\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:17 crc kubenswrapper[4739]: I0218 14:15:17.030094 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/65fdc711-6806-433f-9f62-a09e816c6acf-memberlist\") pod \"speaker-8gqkq\" (UID: \"65fdc711-6806-433f-9f62-a09e816c6acf\") " pod="metallb-system/speaker-8gqkq" Feb 18 14:15:17 crc kubenswrapper[4739]: I0218 14:15:17.169323 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-8gqkq" Feb 18 14:15:17 crc kubenswrapper[4739]: W0218 14:15:17.199753 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65fdc711_6806_433f_9f62_a09e816c6acf.slice/crio-4068d9cd6f50fd513cb3b5db145a9297201d7f2fcc8c88484f21685dc268f875 WatchSource:0}: Error finding container 4068d9cd6f50fd513cb3b5db145a9297201d7f2fcc8c88484f21685dc268f875: Status 404 returned error can't find the container with id 4068d9cd6f50fd513cb3b5db145a9297201d7f2fcc8c88484f21685dc268f875 Feb 18 14:15:17 crc kubenswrapper[4739]: I0218 14:15:17.620088 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-tr2nx" event={"ID":"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d","Type":"ContainerStarted","Data":"6d1d5ce500775d152181c59d53937e176ad0f24dbd787c28625bb76e8ba661ec"} Feb 18 14:15:17 crc kubenswrapper[4739]: I0218 14:15:17.621511 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:17 crc kubenswrapper[4739]: I0218 14:15:17.624902 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6ms6" event={"ID":"c1d69322-06a6-4526-bb0c-be78ad5cd30d","Type":"ContainerStarted","Data":"45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d"} Feb 18 14:15:17 crc kubenswrapper[4739]: I0218 14:15:17.626964 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" event={"ID":"bf495248-0dde-4619-bce7-2cbbda1fd646","Type":"ContainerStarted","Data":"10afffbfc38b905301885591e8c82407aac3135e27d325f7653f1742e43b4a12"} Feb 18 14:15:17 crc kubenswrapper[4739]: I0218 14:15:17.631432 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8gqkq" event={"ID":"65fdc711-6806-433f-9f62-a09e816c6acf","Type":"ContainerStarted","Data":"e0f5239ecd0d03308f1e80f91a9ed7eb0f584e8c0d82253a4f43fe0ea69f33e0"} Feb 18 14:15:17 crc kubenswrapper[4739]: I0218 14:15:17.631497 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8gqkq" event={"ID":"65fdc711-6806-433f-9f62-a09e816c6acf","Type":"ContainerStarted","Data":"4068d9cd6f50fd513cb3b5db145a9297201d7f2fcc8c88484f21685dc268f875"} Feb 18 14:15:17 crc kubenswrapper[4739]: I0218 14:15:17.649709 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-tr2nx" podStartSLOduration=2.649685873 podStartE2EDuration="2.649685873s" podCreationTimestamp="2026-02-18 14:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:15:17.647615811 +0000 UTC m=+950.143336733" watchObservedRunningTime="2026-02-18 14:15:17.649685873 +0000 UTC m=+950.145406805" Feb 18 14:15:17 crc kubenswrapper[4739]: I0218 14:15:17.672477 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w6ms6" podStartSLOduration=3.678523792 podStartE2EDuration="6.672458917s" podCreationTimestamp="2026-02-18 14:15:11 +0000 UTC" firstStartedPulling="2026-02-18 14:15:13.554468841 +0000 UTC m=+946.050189763" lastFinishedPulling="2026-02-18 14:15:16.548403966 +0000 UTC m=+949.044124888" observedRunningTime="2026-02-18 14:15:17.669460852 +0000 UTC m=+950.165181794" watchObservedRunningTime="2026-02-18 14:15:17.672458917 +0000 UTC 
m=+950.168179849" Feb 18 14:15:18 crc kubenswrapper[4739]: I0218 14:15:18.112421 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f88z9"] Feb 18 14:15:18 crc kubenswrapper[4739]: I0218 14:15:18.112694 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f88z9" podUID="b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" containerName="registry-server" containerID="cri-o://cc557a8dbc62cb50f336eee295b41266868c40c03bb8377f8c4e3980b08dbe3f" gracePeriod=2 Feb 18 14:15:18 crc kubenswrapper[4739]: I0218 14:15:18.423624 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91302fcf-f057-4e35-9287-c67dfb9b396b" path="/var/lib/kubelet/pods/91302fcf-f057-4e35-9287-c67dfb9b396b/volumes" Feb 18 14:15:18 crc kubenswrapper[4739]: I0218 14:15:18.641894 4739 generic.go:334] "Generic (PLEG): container finished" podID="b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" containerID="cc557a8dbc62cb50f336eee295b41266868c40c03bb8377f8c4e3980b08dbe3f" exitCode=0 Feb 18 14:15:18 crc kubenswrapper[4739]: I0218 14:15:18.641966 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f88z9" event={"ID":"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8","Type":"ContainerDied","Data":"cc557a8dbc62cb50f336eee295b41266868c40c03bb8377f8c4e3980b08dbe3f"} Feb 18 14:15:18 crc kubenswrapper[4739]: I0218 14:15:18.644798 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8gqkq" event={"ID":"65fdc711-6806-433f-9f62-a09e816c6acf","Type":"ContainerStarted","Data":"1dd001ef3c188c7b8b2c41bc5a869d1fbda4c7d1806b440e90793ffcabf78902"} Feb 18 14:15:18 crc kubenswrapper[4739]: I0218 14:15:18.645370 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-8gqkq" Feb 18 14:15:18 crc kubenswrapper[4739]: I0218 14:15:18.667653 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-8gqkq" podStartSLOduration=3.667636415 podStartE2EDuration="3.667636415s" podCreationTimestamp="2026-02-18 14:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:15:18.663728959 +0000 UTC m=+951.159449901" watchObservedRunningTime="2026-02-18 14:15:18.667636415 +0000 UTC m=+951.163357337" Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.272503 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.276303 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-utilities\") pod \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\" (UID: \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\") " Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.276348 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwcnn\" (UniqueName: \"kubernetes.io/projected/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-kube-api-access-jwcnn\") pod \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\" (UID: \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\") " Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.276487 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-catalog-content\") pod \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\" (UID: \"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8\") " Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.278273 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-utilities" (OuterVolumeSpecName: "utilities") pod "b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" (UID: "b5903958-ccb8-4c15-b6b0-275a1ab3f3e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.293156 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-kube-api-access-jwcnn" (OuterVolumeSpecName: "kube-api-access-jwcnn") pod "b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" (UID: "b5903958-ccb8-4c15-b6b0-275a1ab3f3e8"). InnerVolumeSpecName "kube-api-access-jwcnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.335003 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" (UID: "b5903958-ccb8-4c15-b6b0-275a1ab3f3e8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.378217 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.378262 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwcnn\" (UniqueName: \"kubernetes.io/projected/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-kube-api-access-jwcnn\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.378273 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.654578 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f88z9" event={"ID":"b5903958-ccb8-4c15-b6b0-275a1ab3f3e8","Type":"ContainerDied","Data":"41b5b1fa97b1f509032d4fb0932b3650e238b4807fc2c4bec6abbcc9cb202890"} Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.654644 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f88z9" Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.654666 4739 scope.go:117] "RemoveContainer" containerID="cc557a8dbc62cb50f336eee295b41266868c40c03bb8377f8c4e3980b08dbe3f" Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.697266 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f88z9"] Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.706078 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f88z9"] Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.706915 4739 scope.go:117] "RemoveContainer" containerID="6e6190ad875fe157da0a09a2515c4a706de6d4f39b8ace7b14ac7a871a557108" Feb 18 14:15:19 crc kubenswrapper[4739]: I0218 14:15:19.741817 4739 scope.go:117] "RemoveContainer" containerID="53b0c048fa457de86b418f0b4656b992fed992fd83e70f9b96b2297374e4d95f" Feb 18 14:15:20 crc kubenswrapper[4739]: I0218 14:15:20.422265 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" path="/var/lib/kubelet/pods/b5903958-ccb8-4c15-b6b0-275a1ab3f3e8/volumes" Feb 18 14:15:22 crc kubenswrapper[4739]: I0218 14:15:22.070846 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:22 crc kubenswrapper[4739]: I0218 14:15:22.071063 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:22 crc kubenswrapper[4739]: I0218 14:15:22.120199 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:22 crc kubenswrapper[4739]: I0218 14:15:22.741474 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:23 crc kubenswrapper[4739]: I0218 14:15:23.714821 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6ms6"] Feb 18 14:15:24 crc kubenswrapper[4739]: I0218 14:15:24.700683 4739 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-marketplace-w6ms6" podUID="c1d69322-06a6-4526-bb0c-be78ad5cd30d" containerName="registry-server" containerID="cri-o://45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d" gracePeriod=2 Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.214501 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.383053 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4w8z\" (UniqueName: \"kubernetes.io/projected/c1d69322-06a6-4526-bb0c-be78ad5cd30d-kube-api-access-t4w8z\") pod \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\" (UID: \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\") " Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.383403 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1d69322-06a6-4526-bb0c-be78ad5cd30d-catalog-content\") pod \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\" (UID: \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\") " Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.383436 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1d69322-06a6-4526-bb0c-be78ad5cd30d-utilities\") pod \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\" (UID: \"c1d69322-06a6-4526-bb0c-be78ad5cd30d\") " Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.384782 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1d69322-06a6-4526-bb0c-be78ad5cd30d-utilities" (OuterVolumeSpecName: "utilities") pod "c1d69322-06a6-4526-bb0c-be78ad5cd30d" (UID: "c1d69322-06a6-4526-bb0c-be78ad5cd30d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.389128 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1d69322-06a6-4526-bb0c-be78ad5cd30d-kube-api-access-t4w8z" (OuterVolumeSpecName: "kube-api-access-t4w8z") pod "c1d69322-06a6-4526-bb0c-be78ad5cd30d" (UID: "c1d69322-06a6-4526-bb0c-be78ad5cd30d"). InnerVolumeSpecName "kube-api-access-t4w8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.486521 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4w8z\" (UniqueName: \"kubernetes.io/projected/c1d69322-06a6-4526-bb0c-be78ad5cd30d-kube-api-access-t4w8z\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.486709 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1d69322-06a6-4526-bb0c-be78ad5cd30d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.565089 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1d69322-06a6-4526-bb0c-be78ad5cd30d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1d69322-06a6-4526-bb0c-be78ad5cd30d" (UID: "c1d69322-06a6-4526-bb0c-be78ad5cd30d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.587817 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1d69322-06a6-4526-bb0c-be78ad5cd30d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.708815 4739 generic.go:334] "Generic (PLEG): container finished" podID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerID="9fb59546736878df9c22754f43e72cb090776382ecdd7901f64cc1b5ca20d30f" exitCode=0 Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.708885 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8l6z" event={"ID":"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781","Type":"ContainerDied","Data":"9fb59546736878df9c22754f43e72cb090776382ecdd7901f64cc1b5ca20d30f"} Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.713587 4739 generic.go:334] "Generic (PLEG): container finished" podID="c1d69322-06a6-4526-bb0c-be78ad5cd30d" containerID="45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d" exitCode=0 Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.713691 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6ms6" event={"ID":"c1d69322-06a6-4526-bb0c-be78ad5cd30d","Type":"ContainerDied","Data":"45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d"} Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.713752 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6ms6" event={"ID":"c1d69322-06a6-4526-bb0c-be78ad5cd30d","Type":"ContainerDied","Data":"e64d1cb00401da95c256148f61aaf82a1d57b40a900e4149df42275d07d8deec"} Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.713700 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6ms6" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.713777 4739 scope.go:117] "RemoveContainer" containerID="45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.715069 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" event={"ID":"bf495248-0dde-4619-bce7-2cbbda1fd646","Type":"ContainerStarted","Data":"f193f450786c60c2f37e5a77c47cc484056cd8e9abe8c794be08a7f19c0d6903"} Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.715643 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.739739 4739 scope.go:117] "RemoveContainer" containerID="4d298e39f8640dad33b091b7d6ac236dd76a1087678a6709231f48d290f955f0" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.753625 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" podStartSLOduration=2.469712797 podStartE2EDuration="10.753607792s" podCreationTimestamp="2026-02-18 14:15:15 +0000 UTC" firstStartedPulling="2026-02-18 14:15:16.623106416 +0000 UTC m=+949.118827338" lastFinishedPulling="2026-02-18 14:15:24.907001411 +0000 UTC m=+957.402722333" observedRunningTime="2026-02-18 14:15:25.75311574 +0000 UTC m=+958.248836672" watchObservedRunningTime="2026-02-18 14:15:25.753607792 +0000 UTC m=+958.249328714" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.777514 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6ms6"] Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.785624 4739 scope.go:117] "RemoveContainer" containerID="9dadcc09cca86fbdc712ca2244ebf4d3a1f07ef7fa23b75c6e76d225f2612010" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.789581 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6ms6"] Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.808796 4739 scope.go:117] "RemoveContainer" containerID="45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d" Feb 18 14:15:25 crc kubenswrapper[4739]: E0218 14:15:25.809277 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d\": container with ID starting with 45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d not found: ID does not exist" containerID="45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.809309 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d"} err="failed to get container status \"45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d\": rpc error: code = NotFound desc = could not find container \"45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d\": container with ID starting with 45e1a4dca30f863c3fdd28894cafed8703d7a97789bd59d8581d78a26ed8d17d not found: ID does not exist" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.809332 4739 scope.go:117] "RemoveContainer" containerID="4d298e39f8640dad33b091b7d6ac236dd76a1087678a6709231f48d290f955f0" Feb 18 14:15:25 
crc kubenswrapper[4739]: E0218 14:15:25.810246 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d298e39f8640dad33b091b7d6ac236dd76a1087678a6709231f48d290f955f0\": container with ID starting with 4d298e39f8640dad33b091b7d6ac236dd76a1087678a6709231f48d290f955f0 not found: ID does not exist" containerID="4d298e39f8640dad33b091b7d6ac236dd76a1087678a6709231f48d290f955f0" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.810270 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d298e39f8640dad33b091b7d6ac236dd76a1087678a6709231f48d290f955f0"} err="failed to get container status \"4d298e39f8640dad33b091b7d6ac236dd76a1087678a6709231f48d290f955f0\": rpc error: code = NotFound desc = could not find container \"4d298e39f8640dad33b091b7d6ac236dd76a1087678a6709231f48d290f955f0\": container with ID starting with 4d298e39f8640dad33b091b7d6ac236dd76a1087678a6709231f48d290f955f0 not found: ID does not exist" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.810284 4739 scope.go:117] "RemoveContainer" containerID="9dadcc09cca86fbdc712ca2244ebf4d3a1f07ef7fa23b75c6e76d225f2612010" Feb 18 14:15:25 crc kubenswrapper[4739]: E0218 14:15:25.810739 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dadcc09cca86fbdc712ca2244ebf4d3a1f07ef7fa23b75c6e76d225f2612010\": container with ID starting with 9dadcc09cca86fbdc712ca2244ebf4d3a1f07ef7fa23b75c6e76d225f2612010 not found: ID does not exist" containerID="9dadcc09cca86fbdc712ca2244ebf4d3a1f07ef7fa23b75c6e76d225f2612010" Feb 18 14:15:25 crc kubenswrapper[4739]: I0218 14:15:25.810762 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dadcc09cca86fbdc712ca2244ebf4d3a1f07ef7fa23b75c6e76d225f2612010"} err="failed to get container status \"9dadcc09cca86fbdc712ca2244ebf4d3a1f07ef7fa23b75c6e76d225f2612010\": rpc error: code = NotFound desc = could not find container \"9dadcc09cca86fbdc712ca2244ebf4d3a1f07ef7fa23b75c6e76d225f2612010\": container with ID starting with 9dadcc09cca86fbdc712ca2244ebf4d3a1f07ef7fa23b75c6e76d225f2612010 not found: ID does not exist" Feb 18 14:15:26 crc kubenswrapper[4739]: I0218 14:15:26.420473 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1d69322-06a6-4526-bb0c-be78ad5cd30d" path="/var/lib/kubelet/pods/c1d69322-06a6-4526-bb0c-be78ad5cd30d/volumes" Feb 18 14:15:26 crc kubenswrapper[4739]: I0218 14:15:26.724499 4739 generic.go:334] "Generic (PLEG): container finished" podID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerID="2e6b4ed84ac523c6e3507964c5be205d57bc199e6c95f3a77dd2245daf60fdb1" exitCode=0 Feb 18 14:15:26 crc kubenswrapper[4739]: I0218 14:15:26.724572 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8l6z" event={"ID":"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781","Type":"ContainerDied","Data":"2e6b4ed84ac523c6e3507964c5be205d57bc199e6c95f3a77dd2245daf60fdb1"} Feb 18 14:15:27 crc kubenswrapper[4739]: I0218 14:15:27.174303 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-8gqkq" Feb 18 14:15:27 crc kubenswrapper[4739]: I0218 14:15:27.735046 4739 generic.go:334] "Generic (PLEG): container finished" podID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerID="b27ec3209a4a1bf86065d475c6c4fd1737d6aa46155833527d9114a5ddf2cfd7" exitCode=0 Feb 18 14:15:27 crc 
kubenswrapper[4739]: I0218 14:15:27.735129 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8l6z" event={"ID":"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781","Type":"ContainerDied","Data":"b27ec3209a4a1bf86065d475c6c4fd1737d6aa46155833527d9114a5ddf2cfd7"} Feb 18 14:15:28 crc kubenswrapper[4739]: I0218 14:15:28.751030 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8l6z" event={"ID":"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781","Type":"ContainerStarted","Data":"a671b9560b84c2bc2337e7cd0dbd0611b4e01b445f1313409dce388c125db15e"} Feb 18 14:15:28 crc kubenswrapper[4739]: I0218 14:15:28.751354 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8l6z" event={"ID":"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781","Type":"ContainerStarted","Data":"ef158908c5c0a8407b5e65bec469b2eb70cab108e0e4cb3f92bca1b90e937911"} Feb 18 14:15:28 crc kubenswrapper[4739]: I0218 14:15:28.751365 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8l6z" event={"ID":"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781","Type":"ContainerStarted","Data":"239a0d1abe9b57abf7c29d7eb2654954b99c35d6af28d597ae1aa5e0324e8a86"} Feb 18 14:15:28 crc kubenswrapper[4739]: I0218 14:15:28.751374 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8l6z" event={"ID":"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781","Type":"ContainerStarted","Data":"4b1aee6726e01b4f3e809ead95869c18e7f0932b5c6c23caf9d58537654c4378"} Feb 18 14:15:28 crc kubenswrapper[4739]: I0218 14:15:28.751382 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8l6z" event={"ID":"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781","Type":"ContainerStarted","Data":"d7ace940b5988463e3b8c7226207627946089b351b948bda4a9be22ff01d488d"} Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.763551 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8l6z" event={"ID":"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781","Type":"ContainerStarted","Data":"e7475631559454730a0a662325b3f48366fe7bb27b8e8120bbb67c00be5149a3"} Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.763987 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.806988 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-w8l6z" podStartSLOduration=6.353931564 podStartE2EDuration="14.806965077s" podCreationTimestamp="2026-02-18 14:15:15 +0000 UTC" firstStartedPulling="2026-02-18 14:15:16.454553233 +0000 UTC m=+948.950274155" lastFinishedPulling="2026-02-18 14:15:24.907586746 +0000 UTC m=+957.403307668" observedRunningTime="2026-02-18 14:15:29.794915458 +0000 UTC m=+962.290636390" watchObservedRunningTime="2026-02-18 14:15:29.806965077 +0000 UTC m=+962.302686019" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.821079 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-pkgt6"] Feb 18 14:15:29 crc kubenswrapper[4739]: E0218 14:15:29.821483 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" containerName="extract-utilities" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.821502 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" containerName="extract-utilities" Feb 18 14:15:29 crc kubenswrapper[4739]: E0218 14:15:29.821522 4739 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="91302fcf-f057-4e35-9287-c67dfb9b396b" containerName="extract-utilities" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.821530 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="91302fcf-f057-4e35-9287-c67dfb9b396b" containerName="extract-utilities" Feb 18 14:15:29 crc kubenswrapper[4739]: E0218 14:15:29.821546 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" containerName="extract-content" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.821554 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" containerName="extract-content" Feb 18 14:15:29 crc kubenswrapper[4739]: E0218 14:15:29.821580 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91302fcf-f057-4e35-9287-c67dfb9b396b" containerName="registry-server" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.821587 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="91302fcf-f057-4e35-9287-c67dfb9b396b" containerName="registry-server" Feb 18 14:15:29 crc kubenswrapper[4739]: E0218 14:15:29.821598 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" containerName="registry-server" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.821604 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" containerName="registry-server" Feb 18 14:15:29 crc kubenswrapper[4739]: E0218 14:15:29.821621 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1d69322-06a6-4526-bb0c-be78ad5cd30d" containerName="extract-content" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.821628 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1d69322-06a6-4526-bb0c-be78ad5cd30d" containerName="extract-content" Feb 18 14:15:29 crc kubenswrapper[4739]: E0218 14:15:29.821635 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1d69322-06a6-4526-bb0c-be78ad5cd30d" containerName="registry-server" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.821641 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1d69322-06a6-4526-bb0c-be78ad5cd30d" containerName="registry-server" Feb 18 14:15:29 crc kubenswrapper[4739]: E0218 14:15:29.821655 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91302fcf-f057-4e35-9287-c67dfb9b396b" containerName="extract-content" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.821662 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="91302fcf-f057-4e35-9287-c67dfb9b396b" containerName="extract-content" Feb 18 14:15:29 crc kubenswrapper[4739]: E0218 14:15:29.821674 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1d69322-06a6-4526-bb0c-be78ad5cd30d" containerName="extract-utilities" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.821681 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1d69322-06a6-4526-bb0c-be78ad5cd30d" containerName="extract-utilities" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.821833 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1d69322-06a6-4526-bb0c-be78ad5cd30d" containerName="registry-server" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.821852 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5903958-ccb8-4c15-b6b0-275a1ab3f3e8" containerName="registry-server" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 
14:15:29.821864 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="91302fcf-f057-4e35-9287-c67dfb9b396b" containerName="registry-server" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.822495 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pkgt6" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.828247 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.828875 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-hnndv" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.829080 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.836122 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pkgt6"] Feb 18 14:15:29 crc kubenswrapper[4739]: I0218 14:15:29.969195 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpzxr\" (UniqueName: \"kubernetes.io/projected/963fc9d2-81a3-4bff-babb-9a1fb7115773-kube-api-access-jpzxr\") pod \"openstack-operator-index-pkgt6\" (UID: \"963fc9d2-81a3-4bff-babb-9a1fb7115773\") " pod="openstack-operators/openstack-operator-index-pkgt6" Feb 18 14:15:30 crc kubenswrapper[4739]: I0218 14:15:30.071582 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpzxr\" (UniqueName: \"kubernetes.io/projected/963fc9d2-81a3-4bff-babb-9a1fb7115773-kube-api-access-jpzxr\") pod \"openstack-operator-index-pkgt6\" (UID: \"963fc9d2-81a3-4bff-babb-9a1fb7115773\") " pod="openstack-operators/openstack-operator-index-pkgt6" Feb 18 14:15:30 crc kubenswrapper[4739]: I0218 14:15:30.095317 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpzxr\" (UniqueName: \"kubernetes.io/projected/963fc9d2-81a3-4bff-babb-9a1fb7115773-kube-api-access-jpzxr\") pod \"openstack-operator-index-pkgt6\" (UID: \"963fc9d2-81a3-4bff-babb-9a1fb7115773\") " pod="openstack-operators/openstack-operator-index-pkgt6" Feb 18 14:15:30 crc kubenswrapper[4739]: I0218 14:15:30.148528 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-pkgt6" Feb 18 14:15:30 crc kubenswrapper[4739]: I0218 14:15:30.643588 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pkgt6"] Feb 18 14:15:30 crc kubenswrapper[4739]: W0218 14:15:30.646414 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod963fc9d2_81a3_4bff_babb_9a1fb7115773.slice/crio-c5574386456e272c32c036f065922a9ec16cab39222d7fd39ec5aa7c6a71a863 WatchSource:0}: Error finding container c5574386456e272c32c036f065922a9ec16cab39222d7fd39ec5aa7c6a71a863: Status 404 returned error can't find the container with id c5574386456e272c32c036f065922a9ec16cab39222d7fd39ec5aa7c6a71a863 Feb 18 14:15:30 crc kubenswrapper[4739]: I0218 14:15:30.779972 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pkgt6" event={"ID":"963fc9d2-81a3-4bff-babb-9a1fb7115773","Type":"ContainerStarted","Data":"c5574386456e272c32c036f065922a9ec16cab39222d7fd39ec5aa7c6a71a863"} Feb 18 14:15:31 crc kubenswrapper[4739]: I0218 14:15:31.097527 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:31 crc kubenswrapper[4739]: I0218 14:15:31.140015 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:33 crc kubenswrapper[4739]: I0218 14:15:33.188036 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-pkgt6"] Feb 18 14:15:33 crc kubenswrapper[4739]: I0218 14:15:33.790109 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cnhvq"] Feb 18 14:15:33 crc kubenswrapper[4739]: I0218 14:15:33.791248 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cnhvq" Feb 18 14:15:33 crc kubenswrapper[4739]: I0218 14:15:33.798925 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cnhvq"] Feb 18 14:15:33 crc kubenswrapper[4739]: I0218 14:15:33.946972 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqq8n\" (UniqueName: \"kubernetes.io/projected/07815587-810f-4c17-a671-8c613b3755d6-kube-api-access-pqq8n\") pod \"openstack-operator-index-cnhvq\" (UID: \"07815587-810f-4c17-a671-8c613b3755d6\") " pod="openstack-operators/openstack-operator-index-cnhvq" Feb 18 14:15:34 crc kubenswrapper[4739]: I0218 14:15:34.048898 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqq8n\" (UniqueName: \"kubernetes.io/projected/07815587-810f-4c17-a671-8c613b3755d6-kube-api-access-pqq8n\") pod \"openstack-operator-index-cnhvq\" (UID: \"07815587-810f-4c17-a671-8c613b3755d6\") " pod="openstack-operators/openstack-operator-index-cnhvq" Feb 18 14:15:34 crc kubenswrapper[4739]: I0218 14:15:34.070213 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqq8n\" (UniqueName: \"kubernetes.io/projected/07815587-810f-4c17-a671-8c613b3755d6-kube-api-access-pqq8n\") pod \"openstack-operator-index-cnhvq\" (UID: \"07815587-810f-4c17-a671-8c613b3755d6\") " pod="openstack-operators/openstack-operator-index-cnhvq" Feb 18 14:15:34 crc kubenswrapper[4739]: I0218 14:15:34.113927 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cnhvq" Feb 18 14:15:34 crc kubenswrapper[4739]: I0218 14:15:34.561234 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cnhvq"] Feb 18 14:15:34 crc kubenswrapper[4739]: I0218 14:15:34.813726 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cnhvq" event={"ID":"07815587-810f-4c17-a671-8c613b3755d6","Type":"ContainerStarted","Data":"9505b21dad977e1c9574975d46a44ddbf423ca920e9dce1f532ba31a4a892548"} Feb 18 14:15:34 crc kubenswrapper[4739]: I0218 14:15:34.815940 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pkgt6" event={"ID":"963fc9d2-81a3-4bff-babb-9a1fb7115773","Type":"ContainerStarted","Data":"7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7"} Feb 18 14:15:34 crc kubenswrapper[4739]: I0218 14:15:34.816063 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-pkgt6" podUID="963fc9d2-81a3-4bff-babb-9a1fb7115773" containerName="registry-server" containerID="cri-o://7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7" gracePeriod=2 Feb 18 14:15:34 crc kubenswrapper[4739]: I0218 14:15:34.845281 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-pkgt6" podStartSLOduration=2.665624203 podStartE2EDuration="5.845239606s" podCreationTimestamp="2026-02-18 14:15:29 +0000 UTC" firstStartedPulling="2026-02-18 14:15:30.648570363 +0000 UTC m=+963.144291285" lastFinishedPulling="2026-02-18 14:15:33.828185766 +0000 UTC m=+966.323906688" observedRunningTime="2026-02-18 14:15:34.83610933 +0000 UTC m=+967.331830252" watchObservedRunningTime="2026-02-18 14:15:34.845239606 +0000 UTC m=+967.340960568" 
Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.302658 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pkgt6" Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.473969 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpzxr\" (UniqueName: \"kubernetes.io/projected/963fc9d2-81a3-4bff-babb-9a1fb7115773-kube-api-access-jpzxr\") pod \"963fc9d2-81a3-4bff-babb-9a1fb7115773\" (UID: \"963fc9d2-81a3-4bff-babb-9a1fb7115773\") " Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.492596 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/963fc9d2-81a3-4bff-babb-9a1fb7115773-kube-api-access-jpzxr" (OuterVolumeSpecName: "kube-api-access-jpzxr") pod "963fc9d2-81a3-4bff-babb-9a1fb7115773" (UID: "963fc9d2-81a3-4bff-babb-9a1fb7115773"). InnerVolumeSpecName "kube-api-access-jpzxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.576194 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpzxr\" (UniqueName: \"kubernetes.io/projected/963fc9d2-81a3-4bff-babb-9a1fb7115773-kube-api-access-jpzxr\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.683187 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.824562 4739 generic.go:334] "Generic (PLEG): container finished" podID="963fc9d2-81a3-4bff-babb-9a1fb7115773" containerID="7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7" exitCode=0 Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.824605 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pkgt6" event={"ID":"963fc9d2-81a3-4bff-babb-9a1fb7115773","Type":"ContainerDied","Data":"7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7"} Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.824632 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-pkgt6" Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.824647 4739 scope.go:117] "RemoveContainer" containerID="7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7" Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.824635 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pkgt6" event={"ID":"963fc9d2-81a3-4bff-babb-9a1fb7115773","Type":"ContainerDied","Data":"c5574386456e272c32c036f065922a9ec16cab39222d7fd39ec5aa7c6a71a863"} Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.851656 4739 scope.go:117] "RemoveContainer" containerID="7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7" Feb 18 14:15:35 crc kubenswrapper[4739]: E0218 14:15:35.852636 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7\": container with ID starting with 7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7 not found: ID does not exist" containerID="7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7" Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.852690 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7"} err="failed to get container status \"7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7\": rpc error: code = NotFound desc = could not find container \"7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7\": container with ID starting with 7483f99650425898701e5f3aceb995224de2b3aae5b3f2e089bf21df6f722ea7 not found: ID does not exist" Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.867680 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-pkgt6"] Feb 18 14:15:35 crc kubenswrapper[4739]: I0218 14:15:35.874005 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-pkgt6"] Feb 18 14:15:36 crc kubenswrapper[4739]: I0218 14:15:36.170484 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" Feb 18 14:15:36 crc kubenswrapper[4739]: I0218 14:15:36.420706 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="963fc9d2-81a3-4bff-babb-9a1fb7115773" path="/var/lib/kubelet/pods/963fc9d2-81a3-4bff-babb-9a1fb7115773/volumes" Feb 18 14:15:36 crc kubenswrapper[4739]: I0218 14:15:36.833305 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cnhvq" event={"ID":"07815587-810f-4c17-a671-8c613b3755d6","Type":"ContainerStarted","Data":"f07097d931a10c25326e8aae468135c1bed2cc69762228b9f767f8fec46b12ea"} Feb 18 14:15:36 crc kubenswrapper[4739]: I0218 14:15:36.857947 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cnhvq" podStartSLOduration=3.367101265 podStartE2EDuration="3.857925997s" podCreationTimestamp="2026-02-18 14:15:33 +0000 UTC" firstStartedPulling="2026-02-18 14:15:34.570019152 +0000 UTC m=+967.065740074" lastFinishedPulling="2026-02-18 14:15:35.060843884 +0000 UTC m=+967.556564806" observedRunningTime="2026-02-18 14:15:36.850863502 +0000 UTC m=+969.346584424" watchObservedRunningTime="2026-02-18 14:15:36.857925997 
+0000 UTC m=+969.353646919" Feb 18 14:15:44 crc kubenswrapper[4739]: I0218 14:15:44.114772 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-cnhvq" Feb 18 14:15:44 crc kubenswrapper[4739]: I0218 14:15:44.116406 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-cnhvq" Feb 18 14:15:44 crc kubenswrapper[4739]: I0218 14:15:44.149958 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-cnhvq" Feb 18 14:15:44 crc kubenswrapper[4739]: I0218 14:15:44.920690 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-cnhvq" Feb 18 14:15:46 crc kubenswrapper[4739]: I0218 14:15:46.102234 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-w8l6z" Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.261171 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq"] Feb 18 14:15:48 crc kubenswrapper[4739]: E0218 14:15:48.261854 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963fc9d2-81a3-4bff-babb-9a1fb7115773" containerName="registry-server" Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.261870 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="963fc9d2-81a3-4bff-babb-9a1fb7115773" containerName="registry-server" Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.262023 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="963fc9d2-81a3-4bff-babb-9a1fb7115773" containerName="registry-server" Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.263112 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq" Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.273347 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-klmk6" Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.279548 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq"] Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.396876 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vqqm\" (UniqueName: \"kubernetes.io/projected/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-kube-api-access-4vqqm\") pod \"5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq\" (UID: \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\") " pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq" Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.396999 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-util\") pod \"5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq\" (UID: \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\") " pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq" Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.397045 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-bundle\") pod \"5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq\" (UID: \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\") " pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq" Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.499184 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vqqm\" (UniqueName: \"kubernetes.io/projected/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-kube-api-access-4vqqm\") pod \"5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq\" (UID: \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\") " pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq" Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.499289 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-bundle\") pod \"5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq\" (UID: \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\") " pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq" Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.499309 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-util\") pod \"5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq\" (UID: \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\") " pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq" Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.500055 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-util\") pod \"5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq\" (UID: \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\") " pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq" Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.500190 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-bundle\") pod \"5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq\" (UID: \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\") " pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq" Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.518395 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vqqm\" (UniqueName: \"kubernetes.io/projected/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-kube-api-access-4vqqm\") pod \"5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq\" (UID: \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\") " pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq" Feb 18 14:15:48 crc kubenswrapper[4739]: I0218 14:15:48.594669 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq" Feb 18 14:15:49 crc kubenswrapper[4739]: I0218 14:15:49.091342 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq"] Feb 18 14:15:49 crc kubenswrapper[4739]: I0218 14:15:49.926047 4739 generic.go:334] "Generic (PLEG): container finished" podID="d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" containerID="32de7c6e239f1a69b1c74587fb009358e16bcf6b229d6593aea826e2cf650bb8" exitCode=0 Feb 18 14:15:49 crc kubenswrapper[4739]: I0218 14:15:49.926201 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq" event={"ID":"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90","Type":"ContainerDied","Data":"32de7c6e239f1a69b1c74587fb009358e16bcf6b229d6593aea826e2cf650bb8"} Feb 18 14:15:49 crc kubenswrapper[4739]: I0218 14:15:49.927698 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq" event={"ID":"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90","Type":"ContainerStarted","Data":"6a3167aabfdbaeb9a39dd02415677d240e1997d1df9fa884cccff5b8dce7d89f"} Feb 18 14:15:50 crc kubenswrapper[4739]: I0218 14:15:50.936907 4739 generic.go:334] "Generic (PLEG): container finished" podID="d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" containerID="f18e1854dd5dfb60bd94c9d9af2d4022fc31b8efbeb1359e9c55eb85e25412d8" exitCode=0 Feb 18 14:15:50 crc kubenswrapper[4739]: I0218 14:15:50.937006 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq" event={"ID":"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90","Type":"ContainerDied","Data":"f18e1854dd5dfb60bd94c9d9af2d4022fc31b8efbeb1359e9c55eb85e25412d8"} Feb 18 14:15:51 crc kubenswrapper[4739]: I0218 14:15:51.950660 4739 generic.go:334] "Generic (PLEG): container finished" podID="d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" containerID="8e34dc4d4a2bf97a56bb2ed7f9d89a54d54cb6b68824876ac7647a80ec5532f1" exitCode=0 Feb 18 14:15:51 crc kubenswrapper[4739]: I0218 14:15:51.950704 4739 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq" event={"ID":"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90","Type":"ContainerDied","Data":"8e34dc4d4a2bf97a56bb2ed7f9d89a54d54cb6b68824876ac7647a80ec5532f1"} Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.295611 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq" Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.389570 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vqqm\" (UniqueName: \"kubernetes.io/projected/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-kube-api-access-4vqqm\") pod \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\" (UID: \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\") " Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.389743 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-bundle\") pod \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\" (UID: \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\") " Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.389761 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-util\") pod \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\" (UID: \"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90\") " Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.390355 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-bundle" (OuterVolumeSpecName: "bundle") pod "d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" (UID: "d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.396651 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-kube-api-access-4vqqm" (OuterVolumeSpecName: "kube-api-access-4vqqm") pod "d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" (UID: "d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90"). InnerVolumeSpecName "kube-api-access-4vqqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.435042 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-util" (OuterVolumeSpecName: "util") pod "d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" (UID: "d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.491762 4739 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.491793 4739 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-util\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.491802 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vqqm\" (UniqueName: \"kubernetes.io/projected/d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90-kube-api-access-4vqqm\") on node \"crc\" DevicePath \"\"" Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.969420 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq" event={"ID":"d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90","Type":"ContainerDied","Data":"6a3167aabfdbaeb9a39dd02415677d240e1997d1df9fa884cccff5b8dce7d89f"} Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.969495 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a3167aabfdbaeb9a39dd02415677d240e1997d1df9fa884cccff5b8dce7d89f" Feb 18 14:15:53 crc kubenswrapper[4739]: I0218 14:15:53.969606 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq" Feb 18 14:15:55 crc kubenswrapper[4739]: I0218 14:15:55.760708 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc"] Feb 18 14:15:55 crc kubenswrapper[4739]: E0218 14:15:55.761269 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" containerName="util" Feb 18 14:15:55 crc kubenswrapper[4739]: I0218 14:15:55.761285 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" containerName="util" Feb 18 14:15:55 crc kubenswrapper[4739]: E0218 14:15:55.761301 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" containerName="pull" Feb 18 14:15:55 crc kubenswrapper[4739]: I0218 14:15:55.761306 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" containerName="pull" Feb 18 14:15:55 crc kubenswrapper[4739]: E0218 14:15:55.761325 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" containerName="extract" Feb 18 14:15:55 crc kubenswrapper[4739]: I0218 14:15:55.761331 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" containerName="extract" Feb 18 14:15:55 crc kubenswrapper[4739]: I0218 14:15:55.761495 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90" containerName="extract" Feb 18 14:15:55 crc kubenswrapper[4739]: I0218 14:15:55.762011 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc" Feb 18 14:15:55 crc kubenswrapper[4739]: I0218 14:15:55.768902 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-c9zzv" Feb 18 14:15:55 crc kubenswrapper[4739]: I0218 14:15:55.805153 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc"] Feb 18 14:15:55 crc kubenswrapper[4739]: I0218 14:15:55.835457 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds4sm\" (UniqueName: \"kubernetes.io/projected/8bf4ed0a-8055-462b-9324-1fa1c4f429b1-kube-api-access-ds4sm\") pod \"openstack-operator-controller-init-5864f6ff6b-7n5hc\" (UID: \"8bf4ed0a-8055-462b-9324-1fa1c4f429b1\") " pod="openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc" Feb 18 14:15:55 crc kubenswrapper[4739]: I0218 14:15:55.937813 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds4sm\" (UniqueName: \"kubernetes.io/projected/8bf4ed0a-8055-462b-9324-1fa1c4f429b1-kube-api-access-ds4sm\") pod \"openstack-operator-controller-init-5864f6ff6b-7n5hc\" (UID: \"8bf4ed0a-8055-462b-9324-1fa1c4f429b1\") " pod="openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc" Feb 18 14:15:55 crc kubenswrapper[4739]: I0218 14:15:55.962689 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds4sm\" (UniqueName: \"kubernetes.io/projected/8bf4ed0a-8055-462b-9324-1fa1c4f429b1-kube-api-access-ds4sm\") pod \"openstack-operator-controller-init-5864f6ff6b-7n5hc\" (UID: \"8bf4ed0a-8055-462b-9324-1fa1c4f429b1\") " pod="openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc" Feb 18 14:15:56 crc kubenswrapper[4739]: I0218 14:15:56.082139 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc" Feb 18 14:15:56 crc kubenswrapper[4739]: I0218 14:15:56.690746 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc"] Feb 18 14:15:57 crc kubenswrapper[4739]: I0218 14:15:57.000181 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc" event={"ID":"8bf4ed0a-8055-462b-9324-1fa1c4f429b1","Type":"ContainerStarted","Data":"4b35c274b1e3a6ef1488630d4737649bac48378f7878ea6c2aaf192f7166ee92"} Feb 18 14:16:02 crc kubenswrapper[4739]: I0218 14:16:02.057734 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc" event={"ID":"8bf4ed0a-8055-462b-9324-1fa1c4f429b1","Type":"ContainerStarted","Data":"5759fd8109936917e6ed4c7e129fd30005aaf4dcfe90f5c9e8acb4c336baff58"} Feb 18 14:16:02 crc kubenswrapper[4739]: I0218 14:16:02.059297 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc" Feb 18 14:16:02 crc kubenswrapper[4739]: I0218 14:16:02.092831 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc" podStartSLOduration=2.135728294 podStartE2EDuration="7.09281354s" podCreationTimestamp="2026-02-18 14:15:55 +0000 UTC" firstStartedPulling="2026-02-18 14:15:56.701422261 +0000 UTC m=+989.197143183" lastFinishedPulling="2026-02-18 14:16:01.658507507 +0000 UTC m=+994.154228429" observedRunningTime="2026-02-18 14:16:02.085293514 +0000 UTC m=+994.581014456" watchObservedRunningTime="2026-02-18 14:16:02.09281354 +0000 UTC m=+994.588534452" Feb 18 14:16:06 crc kubenswrapper[4739]: I0218 14:16:06.085534 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.295085 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.296813 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.299350 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-p9bbp" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.306206 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.307332 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.311827 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-vkk6d" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.312594 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fxlv\" (UniqueName: \"kubernetes.io/projected/61bc4b17-baf6-435c-9280-b97fcede913c-kube-api-access-4fxlv\") pod \"barbican-operator-controller-manager-868647ff47-knpz9\" (UID: \"61bc4b17-baf6-435c-9280-b97fcede913c\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.323957 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.339809 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.341082 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.349384 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-sx45f" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.381060 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.397413 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.416518 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxhfm\" (UniqueName: \"kubernetes.io/projected/c8f419fe-23b1-4a93-97fe-05071df32425-kube-api-access-zxhfm\") pod \"designate-operator-controller-manager-6d8bf5c495-47445\" (UID: \"c8f419fe-23b1-4a93-97fe-05071df32425\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.416624 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcbfw\" (UniqueName: \"kubernetes.io/projected/d617f67f-2577-418f-a367-42c366c17980-kube-api-access-bcbfw\") pod \"cinder-operator-controller-manager-5d946d989d-b9hds\" (UID: \"d617f67f-2577-418f-a367-42c366c17980\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.416750 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fxlv\" (UniqueName: \"kubernetes.io/projected/61bc4b17-baf6-435c-9280-b97fcede913c-kube-api-access-4fxlv\") pod \"barbican-operator-controller-manager-868647ff47-knpz9\" (UID: \"61bc4b17-baf6-435c-9280-b97fcede913c\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.417168 4739 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.418492 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.420340 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-vsf2g" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.453398 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.466279 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fxlv\" (UniqueName: \"kubernetes.io/projected/61bc4b17-baf6-435c-9280-b97fcede913c-kube-api-access-4fxlv\") pod \"barbican-operator-controller-manager-868647ff47-knpz9\" (UID: \"61bc4b17-baf6-435c-9280-b97fcede913c\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.481657 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-m469j"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.482724 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.492852 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-m469j"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.496574 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-d59wz" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.516638 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-54k4b"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.517644 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.518020 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcbfw\" (UniqueName: \"kubernetes.io/projected/d617f67f-2577-418f-a367-42c366c17980-kube-api-access-bcbfw\") pod \"cinder-operator-controller-manager-5d946d989d-b9hds\" (UID: \"d617f67f-2577-418f-a367-42c366c17980\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.518090 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk9w2\" (UniqueName: \"kubernetes.io/projected/60bad312-a989-43d1-87e6-6c6f10d1ae8f-kube-api-access-fk9w2\") pod \"heat-operator-controller-manager-69f49c598c-m469j\" (UID: \"60bad312-a989-43d1-87e6-6c6f10d1ae8f\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.518271 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxhfm\" (UniqueName: \"kubernetes.io/projected/c8f419fe-23b1-4a93-97fe-05071df32425-kube-api-access-zxhfm\") pod \"designate-operator-controller-manager-6d8bf5c495-47445\" (UID: \"c8f419fe-23b1-4a93-97fe-05071df32425\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.518320 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc7qx\" (UniqueName: \"kubernetes.io/projected/19470a60-c796-4a28-a0e2-65b50fa94ea6-kube-api-access-pc7qx\") pod \"glance-operator-controller-manager-77987464f4-hxdbh\" (UID: \"19470a60-c796-4a28-a0e2-65b50fa94ea6\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.524722 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qdwzx" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.524869 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.526591 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.529166 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.534510 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-fhpzj" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.549245 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-54k4b"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.574564 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcbfw\" (UniqueName: \"kubernetes.io/projected/d617f67f-2577-418f-a367-42c366c17980-kube-api-access-bcbfw\") pod \"cinder-operator-controller-manager-5d946d989d-b9hds\" (UID: \"d617f67f-2577-418f-a367-42c366c17980\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.574965 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.591897 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.593803 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.593854 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxhfm\" (UniqueName: \"kubernetes.io/projected/c8f419fe-23b1-4a93-97fe-05071df32425-kube-api-access-zxhfm\") pod \"designate-operator-controller-manager-6d8bf5c495-47445\" (UID: \"c8f419fe-23b1-4a93-97fe-05071df32425\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.599176 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-495vm" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.618550 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.623306 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk9w2\" (UniqueName: \"kubernetes.io/projected/60bad312-a989-43d1-87e6-6c6f10d1ae8f-kube-api-access-fk9w2\") pod \"heat-operator-controller-manager-69f49c598c-m469j\" (UID: \"60bad312-a989-43d1-87e6-6c6f10d1ae8f\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.623397 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert\") pod \"infra-operator-controller-manager-79d975b745-54k4b\" (UID: \"b1d0315e-6ccb-4c6a-a488-98454bb41358\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.623466 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nrwb\" (UniqueName: 
\"kubernetes.io/projected/877f7fe3-168f-4b05-a88e-a7a11bf45e36-kube-api-access-5nrwb\") pod \"horizon-operator-controller-manager-5b9b8895d5-xhkdh\" (UID: \"877f7fe3-168f-4b05-a88e-a7a11bf45e36\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.623535 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm8wh\" (UniqueName: \"kubernetes.io/projected/b1d0315e-6ccb-4c6a-a488-98454bb41358-kube-api-access-gm8wh\") pod \"infra-operator-controller-manager-79d975b745-54k4b\" (UID: \"b1d0315e-6ccb-4c6a-a488-98454bb41358\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.623575 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kvnb\" (UniqueName: \"kubernetes.io/projected/fb608395-17b5-4b92-a0be-b5abc08ac979-kube-api-access-2kvnb\") pod \"ironic-operator-controller-manager-554564d7fc-hrxn2\" (UID: \"fb608395-17b5-4b92-a0be-b5abc08ac979\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.623611 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc7qx\" (UniqueName: \"kubernetes.io/projected/19470a60-c796-4a28-a0e2-65b50fa94ea6-kube-api-access-pc7qx\") pod \"glance-operator-controller-manager-77987464f4-hxdbh\" (UID: \"19470a60-c796-4a28-a0e2-65b50fa94ea6\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.625856 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.645098 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.663901 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.685918 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.687686 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.694393 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-sg7jb" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.699769 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk9w2\" (UniqueName: \"kubernetes.io/projected/60bad312-a989-43d1-87e6-6c6f10d1ae8f-kube-api-access-fk9w2\") pod \"heat-operator-controller-manager-69f49c598c-m469j\" (UID: \"60bad312-a989-43d1-87e6-6c6f10d1ae8f\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.701367 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.702993 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc7qx\" (UniqueName: \"kubernetes.io/projected/19470a60-c796-4a28-a0e2-65b50fa94ea6-kube-api-access-pc7qx\") pod \"glance-operator-controller-manager-77987464f4-hxdbh\" (UID: \"19470a60-c796-4a28-a0e2-65b50fa94ea6\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.726644 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-prt26"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.727905 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.729277 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert\") pod \"infra-operator-controller-manager-79d975b745-54k4b\" (UID: \"b1d0315e-6ccb-4c6a-a488-98454bb41358\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.729343 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nrwb\" (UniqueName: \"kubernetes.io/projected/877f7fe3-168f-4b05-a88e-a7a11bf45e36-kube-api-access-5nrwb\") pod \"horizon-operator-controller-manager-5b9b8895d5-xhkdh\" (UID: \"877f7fe3-168f-4b05-a88e-a7a11bf45e36\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.729402 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm8wh\" (UniqueName: \"kubernetes.io/projected/b1d0315e-6ccb-4c6a-a488-98454bb41358-kube-api-access-gm8wh\") pod \"infra-operator-controller-manager-79d975b745-54k4b\" (UID: \"b1d0315e-6ccb-4c6a-a488-98454bb41358\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.729434 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kvnb\" (UniqueName: \"kubernetes.io/projected/fb608395-17b5-4b92-a0be-b5abc08ac979-kube-api-access-2kvnb\") pod \"ironic-operator-controller-manager-554564d7fc-hrxn2\" (UID: \"fb608395-17b5-4b92-a0be-b5abc08ac979\") " 
pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.729478 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmxsd\" (UniqueName: \"kubernetes.io/projected/2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682-kube-api-access-rmxsd\") pod \"keystone-operator-controller-manager-b4d948c87-q4vb2\" (UID: \"2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" Feb 18 14:16:29 crc kubenswrapper[4739]: E0218 14:16:29.729651 4739 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 14:16:29 crc kubenswrapper[4739]: E0218 14:16:29.729693 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert podName:b1d0315e-6ccb-4c6a-a488-98454bb41358 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:30.229675905 +0000 UTC m=+1022.725396817 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert") pod "infra-operator-controller-manager-79d975b745-54k4b" (UID: "b1d0315e-6ccb-4c6a-a488-98454bb41358") : secret "infra-operator-webhook-server-cert" not found Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.734484 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.737262 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.744981 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-kj4qq" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.745363 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-9ntrq" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.766321 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.794841 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm8wh\" (UniqueName: \"kubernetes.io/projected/b1d0315e-6ccb-4c6a-a488-98454bb41358-kube-api-access-gm8wh\") pod \"infra-operator-controller-manager-79d975b745-54k4b\" (UID: \"b1d0315e-6ccb-4c6a-a488-98454bb41358\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.798484 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.799630 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.800862 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kvnb\" (UniqueName: \"kubernetes.io/projected/fb608395-17b5-4b92-a0be-b5abc08ac979-kube-api-access-2kvnb\") pod \"ironic-operator-controller-manager-554564d7fc-hrxn2\" (UID: \"fb608395-17b5-4b92-a0be-b5abc08ac979\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.805093 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nrwb\" (UniqueName: \"kubernetes.io/projected/877f7fe3-168f-4b05-a88e-a7a11bf45e36-kube-api-access-5nrwb\") pod \"horizon-operator-controller-manager-5b9b8895d5-xhkdh\" (UID: \"877f7fe3-168f-4b05-a88e-a7a11bf45e36\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.807070 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.808809 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.816254 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-zkkf6" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.828306 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.828397 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-vmp8w" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.834857 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97vmz\" (UniqueName: \"kubernetes.io/projected/209f2e6c-29e9-444b-b14a-10eadb782a59-kube-api-access-97vmz\") pod \"manila-operator-controller-manager-54f6768c69-prt26\" (UID: \"209f2e6c-29e9-444b-b14a-10eadb782a59\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.834964 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmxsd\" (UniqueName: \"kubernetes.io/projected/2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682-kube-api-access-rmxsd\") pod \"keystone-operator-controller-manager-b4d948c87-q4vb2\" (UID: \"2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.835003 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4tfr\" (UniqueName: \"kubernetes.io/projected/92f1b9c3-1bdd-48ca-9a76-68ace2635cf1-kube-api-access-l4tfr\") pod \"mariadb-operator-controller-manager-6994f66f48-8vh65\" (UID: \"92f1b9c3-1bdd-48ca-9a76-68ace2635cf1\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.835032 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4blrs\" (UniqueName: \"kubernetes.io/projected/40be8fff-51f0-467a-aca5-517e02eea23b-kube-api-access-4blrs\") pod \"nova-operator-controller-manager-567668f5cf-rk7x9\" (UID: \"40be8fff-51f0-467a-aca5-517e02eea23b\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.835116 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fcsj\" (UniqueName: \"kubernetes.io/projected/3b114d0a-837c-4f0c-b02a-db694bdab362-kube-api-access-4fcsj\") pod \"neutron-operator-controller-manager-64ddbf8bb-cdt9l\" (UID: \"3b114d0a-837c-4f0c-b02a-db694bdab362\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.844460 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.852082 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-prt26"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.865093 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmxsd\" (UniqueName: \"kubernetes.io/projected/2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682-kube-api-access-rmxsd\") pod \"keystone-operator-controller-manager-b4d948c87-q4vb2\" (UID: \"2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.903937 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65"] Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.904974 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" Feb 18 14:16:29 crc kubenswrapper[4739]: I0218 14:16:29.924580 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.014172 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97vmz\" (UniqueName: \"kubernetes.io/projected/209f2e6c-29e9-444b-b14a-10eadb782a59-kube-api-access-97vmz\") pod \"manila-operator-controller-manager-54f6768c69-prt26\" (UID: \"209f2e6c-29e9-444b-b14a-10eadb782a59\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.015478 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4tfr\" (UniqueName: \"kubernetes.io/projected/92f1b9c3-1bdd-48ca-9a76-68ace2635cf1-kube-api-access-l4tfr\") pod \"mariadb-operator-controller-manager-6994f66f48-8vh65\" (UID: \"92f1b9c3-1bdd-48ca-9a76-68ace2635cf1\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.015686 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4blrs\" (UniqueName: \"kubernetes.io/projected/40be8fff-51f0-467a-aca5-517e02eea23b-kube-api-access-4blrs\") pod \"nova-operator-controller-manager-567668f5cf-rk7x9\" (UID: \"40be8fff-51f0-467a-aca5-517e02eea23b\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.019409 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fcsj\" (UniqueName: \"kubernetes.io/projected/3b114d0a-837c-4f0c-b02a-db694bdab362-kube-api-access-4fcsj\") pod \"neutron-operator-controller-manager-64ddbf8bb-cdt9l\" (UID: \"3b114d0a-837c-4f0c-b02a-db694bdab362\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.069157 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fcsj\" (UniqueName: \"kubernetes.io/projected/3b114d0a-837c-4f0c-b02a-db694bdab362-kube-api-access-4fcsj\") pod \"neutron-operator-controller-manager-64ddbf8bb-cdt9l\" (UID: \"3b114d0a-837c-4f0c-b02a-db694bdab362\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.074468 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4blrs\" (UniqueName: \"kubernetes.io/projected/40be8fff-51f0-467a-aca5-517e02eea23b-kube-api-access-4blrs\") pod \"nova-operator-controller-manager-567668f5cf-rk7x9\" (UID: \"40be8fff-51f0-467a-aca5-517e02eea23b\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.082391 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.086303 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97vmz\" (UniqueName: \"kubernetes.io/projected/209f2e6c-29e9-444b-b14a-10eadb782a59-kube-api-access-97vmz\") pod \"manila-operator-controller-manager-54f6768c69-prt26\" 
(UID: \"209f2e6c-29e9-444b-b14a-10eadb782a59\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.090344 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4tfr\" (UniqueName: \"kubernetes.io/projected/92f1b9c3-1bdd-48ca-9a76-68ace2635cf1-kube-api-access-l4tfr\") pod \"mariadb-operator-controller-manager-6994f66f48-8vh65\" (UID: \"92f1b9c3-1bdd-48ca-9a76-68ace2635cf1\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.094433 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.107280 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.142046 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.145209 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.151501 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-lw5tm" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.152742 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.199646 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.205316 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.212205 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-6tgrn" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.219909 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.241130 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97z2n\" (UniqueName: \"kubernetes.io/projected/d34f7233-92b8-4803-ab81-0da45a4de925-kube-api-access-97z2n\") pod \"octavia-operator-controller-manager-69f8888797-4f4zc\" (UID: \"d34f7233-92b8-4803-ab81-0da45a4de925\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.241188 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s82qv\" (UniqueName: \"kubernetes.io/projected/e19083b1-791a-4549-b64e-0bb0032abad2-kube-api-access-s82qv\") pod \"placement-operator-controller-manager-8497b45c89-lmvdv\" (UID: \"e19083b1-791a-4549-b64e-0bb0032abad2\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.241302 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert\") pod \"infra-operator-controller-manager-79d975b745-54k4b\" (UID: \"b1d0315e-6ccb-4c6a-a488-98454bb41358\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 14:16:30 crc kubenswrapper[4739]: E0218 14:16:30.241513 4739 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 14:16:30 crc kubenswrapper[4739]: E0218 14:16:30.245985 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert podName:b1d0315e-6ccb-4c6a-a488-98454bb41358 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:31.24595285 +0000 UTC m=+1023.741673792 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert") pod "infra-operator-controller-manager-79d975b745-54k4b" (UID: "b1d0315e-6ccb-4c6a-a488-98454bb41358") : secret "infra-operator-webhook-server-cert" not found Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.277572 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.292737 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.303294 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.303725 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.304836 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.308425 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-5c4xb" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.323367 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.326606 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.330811 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-wmmgv" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.332268 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.334145 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.342562 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv5m5\" (UniqueName: \"kubernetes.io/projected/52927612-b074-4573-aa63-41cbb1d704bf-kube-api-access-mv5m5\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl\" (UID: \"52927612-b074-4573-aa63-41cbb1d704bf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.342632 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97z2n\" (UniqueName: \"kubernetes.io/projected/d34f7233-92b8-4803-ab81-0da45a4de925-kube-api-access-97z2n\") pod \"octavia-operator-controller-manager-69f8888797-4f4zc\" (UID: \"d34f7233-92b8-4803-ab81-0da45a4de925\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.342660 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s82qv\" (UniqueName: \"kubernetes.io/projected/e19083b1-791a-4549-b64e-0bb0032abad2-kube-api-access-s82qv\") pod \"placement-operator-controller-manager-8497b45c89-lmvdv\" (UID: \"e19083b1-791a-4549-b64e-0bb0032abad2\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.342735 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl\" (UID: 
\"52927612-b074-4573-aa63-41cbb1d704bf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.342767 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbplc\" (UniqueName: \"kubernetes.io/projected/8336a5f7-2ff0-440a-88b0-a6ab51692965-kube-api-access-dbplc\") pod \"ovn-operator-controller-manager-d44cf6b75-4lkbs\" (UID: \"8336a5f7-2ff0-440a-88b0-a6ab51692965\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.346519 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.348436 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.350904 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.352498 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-l9xfh" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.365305 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97z2n\" (UniqueName: \"kubernetes.io/projected/d34f7233-92b8-4803-ab81-0da45a4de925-kube-api-access-97z2n\") pod \"octavia-operator-controller-manager-69f8888797-4f4zc\" (UID: \"d34f7233-92b8-4803-ab81-0da45a4de925\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.369103 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s82qv\" (UniqueName: \"kubernetes.io/projected/e19083b1-791a-4549-b64e-0bb0032abad2-kube-api-access-s82qv\") pod \"placement-operator-controller-manager-8497b45c89-lmvdv\" (UID: \"e19083b1-791a-4549-b64e-0bb0032abad2\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.405853 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.407237 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.410395 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-8w85g" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.444457 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl\" (UID: \"52927612-b074-4573-aa63-41cbb1d704bf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.444538 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6z2d\" (UniqueName: \"kubernetes.io/projected/538f0d59-9eea-4f76-a310-f7f724593a1e-kube-api-access-f6z2d\") pod \"telemetry-operator-controller-manager-6956d67c5c-52bt7\" (UID: \"538f0d59-9eea-4f76-a310-f7f724593a1e\") " pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.444587 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbplc\" (UniqueName: \"kubernetes.io/projected/8336a5f7-2ff0-440a-88b0-a6ab51692965-kube-api-access-dbplc\") pod \"ovn-operator-controller-manager-d44cf6b75-4lkbs\" (UID: \"8336a5f7-2ff0-440a-88b0-a6ab51692965\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.444644 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b27nb\" (UniqueName: \"kubernetes.io/projected/ac911184-3930-4f7e-9d77-2cc9e7262ea6-kube-api-access-b27nb\") pod \"swift-operator-controller-manager-68f46476f-s7fsm\" (UID: \"ac911184-3930-4f7e-9d77-2cc9e7262ea6\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" Feb 18 14:16:30 crc kubenswrapper[4739]: E0218 14:16:30.444728 4739 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:16:30 crc kubenswrapper[4739]: E0218 14:16:30.444896 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert podName:52927612-b074-4573-aa63-41cbb1d704bf nodeName:}" failed. No retries permitted until 2026-02-18 14:16:30.944770282 +0000 UTC m=+1023.440491204 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" (UID: "52927612-b074-4573-aa63-41cbb1d704bf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.445290 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv5m5\" (UniqueName: \"kubernetes.io/projected/52927612-b074-4573-aa63-41cbb1d704bf-kube-api-access-mv5m5\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl\" (UID: \"52927612-b074-4573-aa63-41cbb1d704bf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.479762 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbplc\" (UniqueName: \"kubernetes.io/projected/8336a5f7-2ff0-440a-88b0-a6ab51692965-kube-api-access-dbplc\") pod \"ovn-operator-controller-manager-d44cf6b75-4lkbs\" (UID: \"8336a5f7-2ff0-440a-88b0-a6ab51692965\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.483593 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.483634 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.483656 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-jblfh"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.484217 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv5m5\" (UniqueName: \"kubernetes.io/projected/52927612-b074-4573-aa63-41cbb1d704bf-kube-api-access-mv5m5\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl\" (UID: \"52927612-b074-4573-aa63-41cbb1d704bf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.485809 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-jblfh"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.485843 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.486127 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.487427 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.487840 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.489608 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-kslv7" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.489653 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-f4lgz" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.497731 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.502104 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.506508 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.506578 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.515788 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-4dh9w" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.518779 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.559224 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.559337 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6z2d\" (UniqueName: \"kubernetes.io/projected/538f0d59-9eea-4f76-a310-f7f724593a1e-kube-api-access-f6z2d\") pod \"telemetry-operator-controller-manager-6956d67c5c-52bt7\" (UID: \"538f0d59-9eea-4f76-a310-f7f724593a1e\") " pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.559357 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79sj5\" (UniqueName: \"kubernetes.io/projected/8add2ed9-6416-4e9f-a3a1-f8a615962850-kube-api-access-79sj5\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.559382 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncr8j\" (UniqueName: 
\"kubernetes.io/projected/6741b4b4-1817-4639-bdf6-b5be2729a1fa-kube-api-access-ncr8j\") pod \"test-operator-controller-manager-7866795846-jblfh\" (UID: \"6741b4b4-1817-4639-bdf6-b5be2729a1fa\") " pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.559418 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b27nb\" (UniqueName: \"kubernetes.io/projected/ac911184-3930-4f7e-9d77-2cc9e7262ea6-kube-api-access-b27nb\") pod \"swift-operator-controller-manager-68f46476f-s7fsm\" (UID: \"ac911184-3930-4f7e-9d77-2cc9e7262ea6\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.559502 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.559534 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz24r\" (UniqueName: \"kubernetes.io/projected/caed7b7d-66db-4bd9-ba33-efc5f3951069-kube-api-access-gz24r\") pod \"watcher-operator-controller-manager-5db88f68c-kssdd\" (UID: \"caed7b7d-66db-4bd9-ba33-efc5f3951069\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.586736 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b27nb\" (UniqueName: \"kubernetes.io/projected/ac911184-3930-4f7e-9d77-2cc9e7262ea6-kube-api-access-b27nb\") pod \"swift-operator-controller-manager-68f46476f-s7fsm\" (UID: \"ac911184-3930-4f7e-9d77-2cc9e7262ea6\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.593274 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6z2d\" (UniqueName: \"kubernetes.io/projected/538f0d59-9eea-4f76-a310-f7f724593a1e-kube-api-access-f6z2d\") pod \"telemetry-operator-controller-manager-6956d67c5c-52bt7\" (UID: \"538f0d59-9eea-4f76-a310-f7f724593a1e\") " pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.599895 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.601637 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.609497 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.619991 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-8k5vx" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.621027 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.621665 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.669065 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.670889 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79sj5\" (UniqueName: \"kubernetes.io/projected/8add2ed9-6416-4e9f-a3a1-f8a615962850-kube-api-access-79sj5\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.670945 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncr8j\" (UniqueName: \"kubernetes.io/projected/6741b4b4-1817-4639-bdf6-b5be2729a1fa-kube-api-access-ncr8j\") pod \"test-operator-controller-manager-7866795846-jblfh\" (UID: \"6741b4b4-1817-4639-bdf6-b5be2729a1fa\") " pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.671036 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.671072 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz24r\" (UniqueName: \"kubernetes.io/projected/caed7b7d-66db-4bd9-ba33-efc5f3951069-kube-api-access-gz24r\") pod \"watcher-operator-controller-manager-5db88f68c-kssdd\" (UID: \"caed7b7d-66db-4bd9-ba33-efc5f3951069\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.671152 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8j56\" (UniqueName: \"kubernetes.io/projected/06163b75-4f40-42a0-83d8-70c935b9172c-kube-api-access-n8j56\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7gszz\" (UID: \"06163b75-4f40-42a0-83d8-70c935b9172c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.671188 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:30 crc kubenswrapper[4739]: E0218 14:16:30.671405 4739 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 14:16:30 crc kubenswrapper[4739]: E0218 14:16:30.671473 4739 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs podName:8add2ed9-6416-4e9f-a3a1-f8a615962850 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:31.171457465 +0000 UTC m=+1023.667178387 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs") pod "openstack-operator-controller-manager-7954588dd9-trg52" (UID: "8add2ed9-6416-4e9f-a3a1-f8a615962850") : secret "metrics-server-cert" not found Feb 18 14:16:30 crc kubenswrapper[4739]: E0218 14:16:30.675138 4739 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 14:16:30 crc kubenswrapper[4739]: E0218 14:16:30.675227 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs podName:8add2ed9-6416-4e9f-a3a1-f8a615962850 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:31.175199378 +0000 UTC m=+1023.670920300 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs") pod "openstack-operator-controller-manager-7954588dd9-trg52" (UID: "8add2ed9-6416-4e9f-a3a1-f8a615962850") : secret "webhook-server-cert" not found Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.723345 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncr8j\" (UniqueName: \"kubernetes.io/projected/6741b4b4-1817-4639-bdf6-b5be2729a1fa-kube-api-access-ncr8j\") pod \"test-operator-controller-manager-7866795846-jblfh\" (UID: \"6741b4b4-1817-4639-bdf6-b5be2729a1fa\") " pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.724136 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz24r\" (UniqueName: \"kubernetes.io/projected/caed7b7d-66db-4bd9-ba33-efc5f3951069-kube-api-access-gz24r\") pod \"watcher-operator-controller-manager-5db88f68c-kssdd\" (UID: \"caed7b7d-66db-4bd9-ba33-efc5f3951069\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.730734 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79sj5\" (UniqueName: \"kubernetes.io/projected/8add2ed9-6416-4e9f-a3a1-f8a615962850-kube-api-access-79sj5\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.772678 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8j56\" (UniqueName: \"kubernetes.io/projected/06163b75-4f40-42a0-83d8-70c935b9172c-kube-api-access-n8j56\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7gszz\" (UID: \"06163b75-4f40-42a0-83d8-70c935b9172c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.812121 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8j56\" (UniqueName: \"kubernetes.io/projected/06163b75-4f40-42a0-83d8-70c935b9172c-kube-api-access-n8j56\") pod 
\"rabbitmq-cluster-operator-manager-668c99d594-7gszz\" (UID: \"06163b75-4f40-42a0-83d8-70c935b9172c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.917653 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9"] Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.951115 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" Feb 18 14:16:30 crc kubenswrapper[4739]: I0218 14:16:30.976031 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl\" (UID: \"52927612-b074-4573-aa63-41cbb1d704bf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:30 crc kubenswrapper[4739]: E0218 14:16:30.976497 4739 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:16:30 crc kubenswrapper[4739]: E0218 14:16:30.976590 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert podName:52927612-b074-4573-aa63-41cbb1d704bf nodeName:}" failed. No retries permitted until 2026-02-18 14:16:31.976565683 +0000 UTC m=+1024.472286605 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" (UID: "52927612-b074-4573-aa63-41cbb1d704bf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:16:31 crc kubenswrapper[4739]: I0218 14:16:31.000620 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" Feb 18 14:16:31 crc kubenswrapper[4739]: I0218 14:16:31.034621 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" Feb 18 14:16:31 crc kubenswrapper[4739]: I0218 14:16:31.060392 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" Feb 18 14:16:31 crc kubenswrapper[4739]: I0218 14:16:31.116393 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz" Feb 18 14:16:31 crc kubenswrapper[4739]: I0218 14:16:31.180752 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:31 crc kubenswrapper[4739]: I0218 14:16:31.181313 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:31 crc kubenswrapper[4739]: E0218 14:16:31.182817 4739 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 14:16:31 crc kubenswrapper[4739]: E0218 14:16:31.182928 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs podName:8add2ed9-6416-4e9f-a3a1-f8a615962850 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:32.182892631 +0000 UTC m=+1024.678613553 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs") pod "openstack-operator-controller-manager-7954588dd9-trg52" (UID: "8add2ed9-6416-4e9f-a3a1-f8a615962850") : secret "webhook-server-cert" not found Feb 18 14:16:31 crc kubenswrapper[4739]: E0218 14:16:31.185203 4739 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 14:16:31 crc kubenswrapper[4739]: E0218 14:16:31.185269 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs podName:8add2ed9-6416-4e9f-a3a1-f8a615962850 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:32.185246669 +0000 UTC m=+1024.680967591 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs") pod "openstack-operator-controller-manager-7954588dd9-trg52" (UID: "8add2ed9-6416-4e9f-a3a1-f8a615962850") : secret "metrics-server-cert" not found Feb 18 14:16:31 crc kubenswrapper[4739]: I0218 14:16:31.291594 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert\") pod \"infra-operator-controller-manager-79d975b745-54k4b\" (UID: \"b1d0315e-6ccb-4c6a-a488-98454bb41358\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 14:16:31 crc kubenswrapper[4739]: E0218 14:16:31.291849 4739 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 14:16:31 crc kubenswrapper[4739]: E0218 14:16:31.292742 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert podName:b1d0315e-6ccb-4c6a-a488-98454bb41358 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:33.292718145 +0000 UTC m=+1025.788439067 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert") pod "infra-operator-controller-manager-79d975b745-54k4b" (UID: "b1d0315e-6ccb-4c6a-a488-98454bb41358") : secret "infra-operator-webhook-server-cert" not found Feb 18 14:16:31 crc kubenswrapper[4739]: I0218 14:16:31.329713 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9" event={"ID":"61bc4b17-baf6-435c-9280-b97fcede913c","Type":"ContainerStarted","Data":"a1c3c3936aa497548c575e3a1dd2edd60e8994a617cf2d4c16c313b197d47d43"} Feb 18 14:16:31 crc kubenswrapper[4739]: I0218 14:16:31.573823 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds"] Feb 18 14:16:31 crc kubenswrapper[4739]: W0218 14:16:31.596343 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd617f67f_2577_418f_a367_42c366c17980.slice/crio-22770fd641dff190dae2addaee280dc660f53860667069f30bd6cd33fd8da78f WatchSource:0}: Error finding container 22770fd641dff190dae2addaee280dc660f53860667069f30bd6cd33fd8da78f: Status 404 returned error can't find the container with id 22770fd641dff190dae2addaee280dc660f53860667069f30bd6cd33fd8da78f Feb 18 14:16:31 crc kubenswrapper[4739]: I0218 14:16:31.603576 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445"] Feb 18 14:16:31 crc kubenswrapper[4739]: I0218 14:16:31.655945 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh"] Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.014360 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl\" (UID: \"52927612-b074-4573-aa63-41cbb1d704bf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:32 crc kubenswrapper[4739]: 
E0218 14:16:32.014649 4739 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:16:32 crc kubenswrapper[4739]: E0218 14:16:32.014717 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert podName:52927612-b074-4573-aa63-41cbb1d704bf nodeName:}" failed. No retries permitted until 2026-02-18 14:16:34.014699462 +0000 UTC m=+1026.510420384 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" (UID: "52927612-b074-4573-aa63-41cbb1d704bf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.218473 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.218661 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:32 crc kubenswrapper[4739]: E0218 14:16:32.218749 4739 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 14:16:32 crc kubenswrapper[4739]: E0218 14:16:32.218832 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs podName:8add2ed9-6416-4e9f-a3a1-f8a615962850 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:34.218809095 +0000 UTC m=+1026.714530077 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs") pod "openstack-operator-controller-manager-7954588dd9-trg52" (UID: "8add2ed9-6416-4e9f-a3a1-f8a615962850") : secret "metrics-server-cert" not found Feb 18 14:16:32 crc kubenswrapper[4739]: E0218 14:16:32.218849 4739 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 14:16:32 crc kubenswrapper[4739]: E0218 14:16:32.218908 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs podName:8add2ed9-6416-4e9f-a3a1-f8a615962850 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:34.218891587 +0000 UTC m=+1026.714612589 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs") pod "openstack-operator-controller-manager-7954588dd9-trg52" (UID: "8add2ed9-6416-4e9f-a3a1-f8a615962850") : secret "webhook-server-cert" not found Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.317254 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9"] Feb 18 14:16:32 crc kubenswrapper[4739]: W0218 14:16:32.322385 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40be8fff_51f0_467a_aca5_517e02eea23b.slice/crio-79d09b0a588fe7993649ca20283cd1f834a79b84ba84d81bb04ca7735d3e5fc0 WatchSource:0}: Error finding container 79d09b0a588fe7993649ca20283cd1f834a79b84ba84d81bb04ca7735d3e5fc0: Status 404 returned error can't find the container with id 79d09b0a588fe7993649ca20283cd1f834a79b84ba84d81bb04ca7735d3e5fc0 Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.340249 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" event={"ID":"40be8fff-51f0-467a-aca5-517e02eea23b","Type":"ContainerStarted","Data":"79d09b0a588fe7993649ca20283cd1f834a79b84ba84d81bb04ca7735d3e5fc0"} Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.341723 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" event={"ID":"c8f419fe-23b1-4a93-97fe-05071df32425","Type":"ContainerStarted","Data":"acb0115df78d85a449936d5c1c52b22ebb4e7bcb5fbaaae49254abad2a861fe8"} Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.342655 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" event={"ID":"d617f67f-2577-418f-a367-42c366c17980","Type":"ContainerStarted","Data":"22770fd641dff190dae2addaee280dc660f53860667069f30bd6cd33fd8da78f"} Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.343495 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" event={"ID":"19470a60-c796-4a28-a0e2-65b50fa94ea6","Type":"ContainerStarted","Data":"84c39f8d9461fbebb201c04a60ce41eca031946d8167b261a6a5533899ecd27e"} Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.525426 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l"] Feb 18 14:16:32 crc kubenswrapper[4739]: W0218 14:16:32.536472 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e8e2d9d_fbfe_409e_bf3e_ea47e48e1682.slice/crio-31bdf40e6391b2656e9020a2612d49923fb24b7b79eec2611f1e15169de57bb5 WatchSource:0}: Error finding container 31bdf40e6391b2656e9020a2612d49923fb24b7b79eec2611f1e15169de57bb5: Status 404 returned error can't find the container with id 31bdf40e6391b2656e9020a2612d49923fb24b7b79eec2611f1e15169de57bb5 Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.537872 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2"] Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.545272 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-m469j"] Feb 
18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.552188 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2"] Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.559622 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-prt26"] Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.567130 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh"] Feb 18 14:16:32 crc kubenswrapper[4739]: W0218 14:16:32.592898 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b114d0a_837c_4f0c_b02a_db694bdab362.slice/crio-888496df5109b2d716df14dcead5bd4978c3daad7bc8d10848f0503fc3f8e319 WatchSource:0}: Error finding container 888496df5109b2d716df14dcead5bd4978c3daad7bc8d10848f0503fc3f8e319: Status 404 returned error can't find the container with id 888496df5109b2d716df14dcead5bd4978c3daad7bc8d10848f0503fc3f8e319 Feb 18 14:16:32 crc kubenswrapper[4739]: W0218 14:16:32.598797 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod209f2e6c_29e9_444b_b14a_10eadb782a59.slice/crio-8e77f0a6a82a4e6aeefc1aafeba9610b2c1d18bf0813a8e2f1312cdb9c53e827 WatchSource:0}: Error finding container 8e77f0a6a82a4e6aeefc1aafeba9610b2c1d18bf0813a8e2f1312cdb9c53e827: Status 404 returned error can't find the container with id 8e77f0a6a82a4e6aeefc1aafeba9610b2c1d18bf0813a8e2f1312cdb9c53e827 Feb 18 14:16:32 crc kubenswrapper[4739]: W0218 14:16:32.612620 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60bad312_a989_43d1_87e6_6c6f10d1ae8f.slice/crio-91ea96e7716fac20ee0702651532318608f23addcc6c03ccaba047bb76f076ba WatchSource:0}: Error finding container 91ea96e7716fac20ee0702651532318608f23addcc6c03ccaba047bb76f076ba: Status 404 returned error can't find the container with id 91ea96e7716fac20ee0702651532318608f23addcc6c03ccaba047bb76f076ba Feb 18 14:16:32 crc kubenswrapper[4739]: W0218 14:16:32.617582 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod877f7fe3_168f_4b05_a88e_a7a11bf45e36.slice/crio-d42b4603998d0d3a2a664bb8963f0e5f961c09a6822d56605b9dd83bb038e78f WatchSource:0}: Error finding container d42b4603998d0d3a2a664bb8963f0e5f961c09a6822d56605b9dd83bb038e78f: Status 404 returned error can't find the container with id d42b4603998d0d3a2a664bb8963f0e5f961c09a6822d56605b9dd83bb038e78f Feb 18 14:16:32 crc kubenswrapper[4739]: W0218 14:16:32.971807 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8336a5f7_2ff0_440a_88b0_a6ab51692965.slice/crio-e1771e1732730be8cb8cf044407cd36120c251d2d5701ec397aac45239719b11 WatchSource:0}: Error finding container e1771e1732730be8cb8cf044407cd36120c251d2d5701ec397aac45239719b11: Status 404 returned error can't find the container with id e1771e1732730be8cb8cf044407cd36120c251d2d5701ec397aac45239719b11 Feb 18 14:16:32 crc kubenswrapper[4739]: I0218 14:16:32.978155 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs"] Feb 18 14:16:33 crc kubenswrapper[4739]: W0218 14:16:33.029558 4739 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode19083b1_791a_4549_b64e_0bb0032abad2.slice/crio-752e3908bde0a5934cc23be0c78b460041bcd58ae0ca49f5991fa40d41f82df6 WatchSource:0}: Error finding container 752e3908bde0a5934cc23be0c78b460041bcd58ae0ca49f5991fa40d41f82df6: Status 404 returned error can't find the container with id 752e3908bde0a5934cc23be0c78b460041bcd58ae0ca49f5991fa40d41f82df6 Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.058964 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc"] Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.075543 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65"] Feb 18 14:16:33 crc kubenswrapper[4739]: W0218 14:16:33.081331 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac911184_3930_4f7e_9d77_2cc9e7262ea6.slice/crio-516032e4def080cf6595023255aa61d5b8db081d33c65fe677d69f3854c58c08 WatchSource:0}: Error finding container 516032e4def080cf6595023255aa61d5b8db081d33c65fe677d69f3854c58c08: Status 404 returned error can't find the container with id 516032e4def080cf6595023255aa61d5b8db081d33c65fe677d69f3854c58c08 Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.092308 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv"] Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.099176 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd"] Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.110624 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm"] Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.311610 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7"] Feb 18 14:16:33 crc kubenswrapper[4739]: W0218 14:16:33.334887 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod538f0d59_9eea_4f76_a310_f7f724593a1e.slice/crio-faa6fe2d6cb7661a7eb6d912ab878c5bec659aaa8c9777f6eb540a35c068a607 WatchSource:0}: Error finding container faa6fe2d6cb7661a7eb6d912ab878c5bec659aaa8c9777f6eb540a35c068a607: Status 404 returned error can't find the container with id faa6fe2d6cb7661a7eb6d912ab878c5bec659aaa8c9777f6eb540a35c068a607 Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.357528 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs" event={"ID":"8336a5f7-2ff0-440a-88b0-a6ab51692965","Type":"ContainerStarted","Data":"e1771e1732730be8cb8cf044407cd36120c251d2d5701ec397aac45239719b11"} Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.360768 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" event={"ID":"3b114d0a-837c-4f0c-b02a-db694bdab362","Type":"ContainerStarted","Data":"888496df5109b2d716df14dcead5bd4978c3daad7bc8d10848f0503fc3f8e319"} Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.361789 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" event={"ID":"877f7fe3-168f-4b05-a88e-a7a11bf45e36","Type":"ContainerStarted","Data":"d42b4603998d0d3a2a664bb8963f0e5f961c09a6822d56605b9dd83bb038e78f"} Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.364349 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" event={"ID":"e19083b1-791a-4549-b64e-0bb0032abad2","Type":"ContainerStarted","Data":"752e3908bde0a5934cc23be0c78b460041bcd58ae0ca49f5991fa40d41f82df6"} Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.372542 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" event={"ID":"92f1b9c3-1bdd-48ca-9a76-68ace2635cf1","Type":"ContainerStarted","Data":"fb40c1410d26319eb159847449fb9ae482c108aa746e969b93fbd85bbc0434ba"} Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.375054 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" event={"ID":"fb608395-17b5-4b92-a0be-b5abc08ac979","Type":"ContainerStarted","Data":"8ca939195772b46bdcc61b173814f4d1ea27b68e239e08817e9265fb0211513f"} Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.378649 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j" event={"ID":"60bad312-a989-43d1-87e6-6c6f10d1ae8f","Type":"ContainerStarted","Data":"91ea96e7716fac20ee0702651532318608f23addcc6c03ccaba047bb76f076ba"} Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.379772 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" event={"ID":"ac911184-3930-4f7e-9d77-2cc9e7262ea6","Type":"ContainerStarted","Data":"516032e4def080cf6595023255aa61d5b8db081d33c65fe677d69f3854c58c08"} Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.380755 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" event={"ID":"caed7b7d-66db-4bd9-ba33-efc5f3951069","Type":"ContainerStarted","Data":"a65a75e9097a4778ddcd5c4d75982228aba4b618eec253fd1189dbbcd46fe452"} Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.381697 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" event={"ID":"538f0d59-9eea-4f76-a310-f7f724593a1e","Type":"ContainerStarted","Data":"faa6fe2d6cb7661a7eb6d912ab878c5bec659aaa8c9777f6eb540a35c068a607"} Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.382432 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" event={"ID":"d34f7233-92b8-4803-ab81-0da45a4de925","Type":"ContainerStarted","Data":"4034c4d24ef9e1a0430cc9101561e1de57649244954346e3ddec6d84a716c7ac"} Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.385676 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" event={"ID":"2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682","Type":"ContainerStarted","Data":"31bdf40e6391b2656e9020a2612d49923fb24b7b79eec2611f1e15169de57bb5"} Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.386978 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26" event={"ID":"209f2e6c-29e9-444b-b14a-10eadb782a59","Type":"ContainerStarted","Data":"8e77f0a6a82a4e6aeefc1aafeba9610b2c1d18bf0813a8e2f1312cdb9c53e827"} Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.389517 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert\") pod \"infra-operator-controller-manager-79d975b745-54k4b\" (UID: \"b1d0315e-6ccb-4c6a-a488-98454bb41358\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 14:16:33 crc kubenswrapper[4739]: E0218 14:16:33.389654 4739 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 14:16:33 crc kubenswrapper[4739]: E0218 14:16:33.389729 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert podName:b1d0315e-6ccb-4c6a-a488-98454bb41358 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:37.389710798 +0000 UTC m=+1029.885431720 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert") pod "infra-operator-controller-manager-79d975b745-54k4b" (UID: "b1d0315e-6ccb-4c6a-a488-98454bb41358") : secret "infra-operator-webhook-server-cert" not found Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.461971 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz"] Feb 18 14:16:33 crc kubenswrapper[4739]: E0218 14:16:33.470814 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ncr8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-jblfh_openstack-operators(6741b4b4-1817-4639-bdf6-b5be2729a1fa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 14:16:33 crc kubenswrapper[4739]: I0218 14:16:33.470992 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-jblfh"] Feb 18 14:16:33 crc kubenswrapper[4739]: E0218 14:16:33.472792 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" podUID="6741b4b4-1817-4639-bdf6-b5be2729a1fa" Feb 18 14:16:34 crc kubenswrapper[4739]: I0218 14:16:34.102909 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl\" (UID: \"52927612-b074-4573-aa63-41cbb1d704bf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:34 crc kubenswrapper[4739]: E0218 14:16:34.103111 4739 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:16:34 crc kubenswrapper[4739]: E0218 14:16:34.103612 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert podName:52927612-b074-4573-aa63-41cbb1d704bf nodeName:}" failed. No retries permitted until 2026-02-18 14:16:38.103586305 +0000 UTC m=+1030.599307227 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" (UID: "52927612-b074-4573-aa63-41cbb1d704bf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:16:34 crc kubenswrapper[4739]: I0218 14:16:34.307189 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:34 crc kubenswrapper[4739]: E0218 14:16:34.307421 4739 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 14:16:34 crc kubenswrapper[4739]: I0218 14:16:34.307473 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:34 crc kubenswrapper[4739]: E0218 14:16:34.307558 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs podName:8add2ed9-6416-4e9f-a3a1-f8a615962850 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:38.307535074 +0000 UTC m=+1030.803256196 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs") pod "openstack-operator-controller-manager-7954588dd9-trg52" (UID: "8add2ed9-6416-4e9f-a3a1-f8a615962850") : secret "metrics-server-cert" not found Feb 18 14:16:34 crc kubenswrapper[4739]: E0218 14:16:34.307634 4739 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 14:16:34 crc kubenswrapper[4739]: E0218 14:16:34.307780 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs podName:8add2ed9-6416-4e9f-a3a1-f8a615962850 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:38.307744959 +0000 UTC m=+1030.803466081 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs") pod "openstack-operator-controller-manager-7954588dd9-trg52" (UID: "8add2ed9-6416-4e9f-a3a1-f8a615962850") : secret "webhook-server-cert" not found Feb 18 14:16:34 crc kubenswrapper[4739]: I0218 14:16:34.398282 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz" event={"ID":"06163b75-4f40-42a0-83d8-70c935b9172c","Type":"ContainerStarted","Data":"72a837e466540fb33dc740a4e15d77d26716e20825fb8d345f62d8d560dea7c7"} Feb 18 14:16:34 crc kubenswrapper[4739]: I0218 14:16:34.407253 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" event={"ID":"6741b4b4-1817-4639-bdf6-b5be2729a1fa","Type":"ContainerStarted","Data":"8a3c46d16c5456d759f7d03f158bf8e868b5cf3eeb0e970f3d7a255e6772bf42"} Feb 18 14:16:34 crc kubenswrapper[4739]: E0218 14:16:34.408876 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" podUID="6741b4b4-1817-4639-bdf6-b5be2729a1fa" Feb 18 14:16:35 crc kubenswrapper[4739]: E0218 14:16:35.422912 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" podUID="6741b4b4-1817-4639-bdf6-b5be2729a1fa" Feb 18 14:16:37 crc kubenswrapper[4739]: I0218 14:16:37.398884 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert\") pod \"infra-operator-controller-manager-79d975b745-54k4b\" (UID: \"b1d0315e-6ccb-4c6a-a488-98454bb41358\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 14:16:37 crc kubenswrapper[4739]: E0218 14:16:37.399332 4739 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 14:16:37 crc kubenswrapper[4739]: E0218 14:16:37.399474 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert podName:b1d0315e-6ccb-4c6a-a488-98454bb41358 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:45.399453856 +0000 UTC m=+1037.895174778 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert") pod "infra-operator-controller-manager-79d975b745-54k4b" (UID: "b1d0315e-6ccb-4c6a-a488-98454bb41358") : secret "infra-operator-webhook-server-cert" not found Feb 18 14:16:38 crc kubenswrapper[4739]: I0218 14:16:38.117810 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl\" (UID: \"52927612-b074-4573-aa63-41cbb1d704bf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:38 crc kubenswrapper[4739]: E0218 14:16:38.118485 4739 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:16:38 crc kubenswrapper[4739]: E0218 14:16:38.118548 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert podName:52927612-b074-4573-aa63-41cbb1d704bf nodeName:}" failed. No retries permitted until 2026-02-18 14:16:46.118529132 +0000 UTC m=+1038.614250054 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" (UID: "52927612-b074-4573-aa63-41cbb1d704bf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 14:16:38 crc kubenswrapper[4739]: I0218 14:16:38.321954 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:38 crc kubenswrapper[4739]: I0218 14:16:38.322186 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:38 crc kubenswrapper[4739]: E0218 14:16:38.322437 4739 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 14:16:38 crc kubenswrapper[4739]: E0218 14:16:38.322516 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs podName:8add2ed9-6416-4e9f-a3a1-f8a615962850 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:46.322496691 +0000 UTC m=+1038.818217613 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs") pod "openstack-operator-controller-manager-7954588dd9-trg52" (UID: "8add2ed9-6416-4e9f-a3a1-f8a615962850") : secret "webhook-server-cert" not found Feb 18 14:16:38 crc kubenswrapper[4739]: E0218 14:16:38.322995 4739 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 14:16:38 crc kubenswrapper[4739]: E0218 14:16:38.323047 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs podName:8add2ed9-6416-4e9f-a3a1-f8a615962850 nodeName:}" failed. No retries permitted until 2026-02-18 14:16:46.323036555 +0000 UTC m=+1038.818757477 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs") pod "openstack-operator-controller-manager-7954588dd9-trg52" (UID: "8add2ed9-6416-4e9f-a3a1-f8a615962850") : secret "metrics-server-cert" not found Feb 18 14:16:44 crc kubenswrapper[4739]: E0218 14:16:44.403182 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" Feb 18 14:16:44 crc kubenswrapper[4739]: E0218 14:16:44.403715 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4fxlv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-868647ff47-knpz9_openstack-operators(61bc4b17-baf6-435c-9280-b97fcede913c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:16:44 crc kubenswrapper[4739]: E0218 14:16:44.405563 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9" podUID="61bc4b17-baf6-435c-9280-b97fcede913c" Feb 18 14:16:44 crc kubenswrapper[4739]: E0218 14:16:44.552576 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9" podUID="61bc4b17-baf6-435c-9280-b97fcede913c" Feb 18 14:16:45 crc kubenswrapper[4739]: I0218 14:16:45.459910 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert\") pod \"infra-operator-controller-manager-79d975b745-54k4b\" (UID: \"b1d0315e-6ccb-4c6a-a488-98454bb41358\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 14:16:45 crc kubenswrapper[4739]: I0218 14:16:45.472932 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1d0315e-6ccb-4c6a-a488-98454bb41358-cert\") pod \"infra-operator-controller-manager-79d975b745-54k4b\" (UID: \"b1d0315e-6ccb-4c6a-a488-98454bb41358\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 14:16:45 crc kubenswrapper[4739]: I0218 14:16:45.582507 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 14:16:46 crc kubenswrapper[4739]: I0218 14:16:46.173915 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl\" (UID: \"52927612-b074-4573-aa63-41cbb1d704bf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:46 crc kubenswrapper[4739]: I0218 14:16:46.180294 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52927612-b074-4573-aa63-41cbb1d704bf-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl\" (UID: \"52927612-b074-4573-aa63-41cbb1d704bf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:46 crc kubenswrapper[4739]: I0218 14:16:46.377999 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:46 crc kubenswrapper[4739]: I0218 14:16:46.378110 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:46 crc kubenswrapper[4739]: E0218 14:16:46.378139 4739 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 14:16:46 crc kubenswrapper[4739]: E0218 14:16:46.378204 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs podName:8add2ed9-6416-4e9f-a3a1-f8a615962850 nodeName:}" failed. No retries permitted until 2026-02-18 14:17:02.378186214 +0000 UTC m=+1054.873907136 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs") pod "openstack-operator-controller-manager-7954588dd9-trg52" (UID: "8add2ed9-6416-4e9f-a3a1-f8a615962850") : secret "webhook-server-cert" not found Feb 18 14:16:46 crc kubenswrapper[4739]: I0218 14:16:46.382666 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-metrics-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:16:46 crc kubenswrapper[4739]: I0218 14:16:46.439061 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:16:48 crc kubenswrapper[4739]: E0218 14:16:48.528346 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" Feb 18 14:16:48 crc kubenswrapper[4739]: E0218 14:16:48.528749 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bcbfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-5d946d989d-b9hds_openstack-operators(d617f67f-2577-418f-a367-42c366c17980): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:16:48 crc kubenswrapper[4739]: E0218 14:16:48.529984 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" podUID="d617f67f-2577-418f-a367-42c366c17980" Feb 18 14:16:48 crc kubenswrapper[4739]: E0218 14:16:48.584160 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" podUID="d617f67f-2577-418f-a367-42c366c17980" Feb 18 14:16:49 crc kubenswrapper[4739]: E0218 14:16:49.643943 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" Feb 18 14:16:49 crc kubenswrapper[4739]: E0218 14:16:49.644414 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-97vmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-prt26_openstack-operators(209f2e6c-29e9-444b-b14a-10eadb782a59): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:16:49 crc kubenswrapper[4739]: E0218 14:16:49.645578 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26" podUID="209f2e6c-29e9-444b-b14a-10eadb782a59" Feb 18 14:16:50 crc kubenswrapper[4739]: E0218 14:16:50.607718 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26" podUID="209f2e6c-29e9-444b-b14a-10eadb782a59" Feb 18 14:16:51 crc kubenswrapper[4739]: E0218 14:16:51.995592 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" Feb 18 14:16:51 crc kubenswrapper[4739]: E0218 14:16:51.996166 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zxhfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d8bf5c495-47445_openstack-operators(c8f419fe-23b1-4a93-97fe-05071df32425): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:16:51 crc 
kubenswrapper[4739]: E0218 14:16:51.998319 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" podUID="c8f419fe-23b1-4a93-97fe-05071df32425" Feb 18 14:16:52 crc kubenswrapper[4739]: E0218 14:16:52.619701 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" podUID="c8f419fe-23b1-4a93-97fe-05071df32425" Feb 18 14:16:54 crc kubenswrapper[4739]: E0218 14:16:54.705618 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" Feb 18 14:16:54 crc kubenswrapper[4739]: E0218 14:16:54.706168 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5nrwb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
horizon-operator-controller-manager-5b9b8895d5-xhkdh_openstack-operators(877f7fe3-168f-4b05-a88e-a7a11bf45e36): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:16:54 crc kubenswrapper[4739]: E0218 14:16:54.707714 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" podUID="877f7fe3-168f-4b05-a88e-a7a11bf45e36" Feb 18 14:16:55 crc kubenswrapper[4739]: E0218 14:16:55.534162 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" Feb 18 14:16:55 crc kubenswrapper[4739]: E0218 14:16:55.534347 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4fcsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-cdt9l_openstack-operators(3b114d0a-837c-4f0c-b02a-db694bdab362): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:16:55 crc 
kubenswrapper[4739]: E0218 14:16:55.535700 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" podUID="3b114d0a-837c-4f0c-b02a-db694bdab362" Feb 18 14:16:55 crc kubenswrapper[4739]: E0218 14:16:55.657416 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" podUID="3b114d0a-837c-4f0c-b02a-db694bdab362" Feb 18 14:16:55 crc kubenswrapper[4739]: E0218 14:16:55.657497 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" podUID="877f7fe3-168f-4b05-a88e-a7a11bf45e36" Feb 18 14:16:56 crc kubenswrapper[4739]: E0218 14:16:56.246488 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" Feb 18 14:16:56 crc kubenswrapper[4739]: E0218 14:16:56.246717 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gz24r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-kssdd_openstack-operators(caed7b7d-66db-4bd9-ba33-efc5f3951069): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:16:56 crc kubenswrapper[4739]: E0218 14:16:56.247917 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" podUID="caed7b7d-66db-4bd9-ba33-efc5f3951069" Feb 18 14:16:56 crc kubenswrapper[4739]: E0218 14:16:56.662085 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" podUID="caed7b7d-66db-4bd9-ba33-efc5f3951069" Feb 18 14:16:57 crc kubenswrapper[4739]: E0218 14:16:57.638496 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 18 14:16:57 crc kubenswrapper[4739]: E0218 14:16:57.638735 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l4tfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-8vh65_openstack-operators(92f1b9c3-1bdd-48ca-9a76-68ace2635cf1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:16:57 crc kubenswrapper[4739]: E0218 14:16:57.641221 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" podUID="92f1b9c3-1bdd-48ca-9a76-68ace2635cf1" Feb 18 14:16:57 crc kubenswrapper[4739]: E0218 14:16:57.669543 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" podUID="92f1b9c3-1bdd-48ca-9a76-68ace2635cf1" Feb 18 14:16:59 crc kubenswrapper[4739]: I0218 14:16:59.372871 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:16:59 crc kubenswrapper[4739]: I0218 14:16:59.373300 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:17:00 crc kubenswrapper[4739]: E0218 14:17:00.574546 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" Feb 18 14:17:00 crc kubenswrapper[4739]: E0218 14:17:00.575172 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b27nb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-s7fsm_openstack-operators(ac911184-3930-4f7e-9d77-2cc9e7262ea6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:17:00 crc kubenswrapper[4739]: E0218 14:17:00.576592 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" podUID="ac911184-3930-4f7e-9d77-2cc9e7262ea6" Feb 18 14:17:00 crc kubenswrapper[4739]: E0218 14:17:00.697067 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" podUID="ac911184-3930-4f7e-9d77-2cc9e7262ea6" Feb 18 14:17:02 crc kubenswrapper[4739]: I0218 14:17:02.415205 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:17:02 crc kubenswrapper[4739]: I0218 14:17:02.425221 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8add2ed9-6416-4e9f-a3a1-f8a615962850-webhook-certs\") pod \"openstack-operator-controller-manager-7954588dd9-trg52\" (UID: \"8add2ed9-6416-4e9f-a3a1-f8a615962850\") " pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:17:02 crc kubenswrapper[4739]: I0218 14:17:02.579905 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:17:04 crc kubenswrapper[4739]: E0218 14:17:04.245856 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" Feb 18 14:17:04 crc kubenswrapper[4739]: E0218 14:17:04.246314 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pc7qx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987464f4-hxdbh_openstack-operators(19470a60-c796-4a28-a0e2-65b50fa94ea6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:17:04 crc kubenswrapper[4739]: E0218 14:17:04.247796 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" podUID="19470a60-c796-4a28-a0e2-65b50fa94ea6" Feb 18 14:17:04 crc kubenswrapper[4739]: E0218 14:17:04.726590 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df\\\"\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" podUID="19470a60-c796-4a28-a0e2-65b50fa94ea6" Feb 18 14:17:05 crc kubenswrapper[4739]: E0218 14:17:05.370772 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" Feb 18 14:17:05 crc kubenswrapper[4739]: E0218 14:17:05.371029 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dbplc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-4lkbs_openstack-operators(8336a5f7-2ff0-440a-88b0-a6ab51692965): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:17:05 crc kubenswrapper[4739]: E0218 14:17:05.372290 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs" podUID="8336a5f7-2ff0-440a-88b0-a6ab51692965" Feb 18 14:17:05 crc kubenswrapper[4739]: E0218 14:17:05.735521 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs" podUID="8336a5f7-2ff0-440a-88b0-a6ab51692965" Feb 18 14:17:05 crc kubenswrapper[4739]: E0218 14:17:05.879156 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" Feb 18 14:17:05 crc kubenswrapper[4739]: E0218 14:17:05.879380 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fk9w2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69f49c598c-m469j_openstack-operators(60bad312-a989-43d1-87e6-6c6f10d1ae8f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:17:05 crc kubenswrapper[4739]: E0218 14:17:05.880692 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j" podUID="60bad312-a989-43d1-87e6-6c6f10d1ae8f" Feb 18 14:17:06 crc kubenswrapper[4739]: E0218 14:17:06.506131 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" Feb 18 14:17:06 crc kubenswrapper[4739]: E0218 14:17:06.506562 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-97z2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-4f4zc_openstack-operators(d34f7233-92b8-4803-ab81-0da45a4de925): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:17:06 crc kubenswrapper[4739]: E0218 14:17:06.507789 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" podUID="d34f7233-92b8-4803-ab81-0da45a4de925" Feb 18 14:17:06 crc kubenswrapper[4739]: E0218 14:17:06.743879 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j" podUID="60bad312-a989-43d1-87e6-6c6f10d1ae8f" Feb 18 14:17:06 crc kubenswrapper[4739]: E0218 14:17:06.744085 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" podUID="d34f7233-92b8-4803-ab81-0da45a4de925" Feb 18 14:17:08 crc kubenswrapper[4739]: E0218 14:17:08.607349 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 18 14:17:08 crc kubenswrapper[4739]: E0218 14:17:08.608896 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4blrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-rk7x9_openstack-operators(40be8fff-51f0-467a-aca5-517e02eea23b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:17:08 crc kubenswrapper[4739]: E0218 14:17:08.610264 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" podUID="40be8fff-51f0-467a-aca5-517e02eea23b" Feb 18 14:17:08 crc kubenswrapper[4739]: E0218 14:17:08.703679 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 18 14:17:08 crc kubenswrapper[4739]: E0218 14:17:08.703749 4739 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.147:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1" Feb 18 14:17:08 crc kubenswrapper[4739]: E0218 14:17:08.703911 4739 kuberuntime_manager.go:1274] "Unhandled 
Error" err="container &Container{Name:manager,Image:38.102.83.147:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f6z2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6956d67c5c-52bt7_openstack-operators(538f0d59-9eea-4f76-a310-f7f724593a1e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:17:08 crc kubenswrapper[4739]: E0218 14:17:08.705126 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" podUID="538f0d59-9eea-4f76-a310-f7f724593a1e" Feb 18 14:17:08 crc kubenswrapper[4739]: E0218 14:17:08.758862 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" podUID="40be8fff-51f0-467a-aca5-517e02eea23b" Feb 18 14:17:08 crc kubenswrapper[4739]: E0218 14:17:08.759107 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"38.102.83.147:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" podUID="538f0d59-9eea-4f76-a310-f7f724593a1e" Feb 18 14:17:09 crc kubenswrapper[4739]: E0218 14:17:09.320321 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 18 14:17:09 crc kubenswrapper[4739]: E0218 14:17:09.320602 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rmxsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-q4vb2_openstack-operators(2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:17:09 crc kubenswrapper[4739]: E0218 14:17:09.323295 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" podUID="2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682" Feb 18 
14:17:09 crc kubenswrapper[4739]: E0218 14:17:09.765828 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" podUID="2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682" Feb 18 14:17:10 crc kubenswrapper[4739]: E0218 14:17:10.099723 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" Feb 18 14:17:10 crc kubenswrapper[4739]: E0218 14:17:10.099972 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ncr8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-jblfh_openstack-operators(6741b4b4-1817-4639-bdf6-b5be2729a1fa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:17:10 crc kubenswrapper[4739]: E0218 14:17:10.101203 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" podUID="6741b4b4-1817-4639-bdf6-b5be2729a1fa" Feb 18 14:17:10 crc kubenswrapper[4739]: E0218 14:17:10.530800 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 18 14:17:10 crc kubenswrapper[4739]: E0218 14:17:10.531586 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n8j56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-7gszz_openstack-operators(06163b75-4f40-42a0-83d8-70c935b9172c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:17:10 crc kubenswrapper[4739]: E0218 14:17:10.532906 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz" podUID="06163b75-4f40-42a0-83d8-70c935b9172c" Feb 18 14:17:10 crc kubenswrapper[4739]: E0218 14:17:10.807054 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz" podUID="06163b75-4f40-42a0-83d8-70c935b9172c" Feb 18 14:17:10 crc kubenswrapper[4739]: I0218 14:17:10.980755 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52"] Feb 18 14:17:11 crc kubenswrapper[4739]: W0218 14:17:11.283763 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52927612_b074_4573_aa63_41cbb1d704bf.slice/crio-7620fb3ed0529cab0da3c0f659b8f1b47ed2e65369328e05b78428d53064c63c WatchSource:0}: Error finding container 7620fb3ed0529cab0da3c0f659b8f1b47ed2e65369328e05b78428d53064c63c: Status 404 returned error can't find the container with id 7620fb3ed0529cab0da3c0f659b8f1b47ed2e65369328e05b78428d53064c63c Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.289374 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl"] Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.377684 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-54k4b"] Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.814480 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" event={"ID":"877f7fe3-168f-4b05-a88e-a7a11bf45e36","Type":"ContainerStarted","Data":"371534f04aace7c53c2469bbbae9b5ced744e16ce26172792563ecd694b4570a"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.815787 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.821137 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" event={"ID":"c8f419fe-23b1-4a93-97fe-05071df32425","Type":"ContainerStarted","Data":"4406471c71e4a2933549ab100f973cba46a0995206aef2a7133eeb9f42b27c4c"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.821823 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.823067 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" event={"ID":"52927612-b074-4573-aa63-41cbb1d704bf","Type":"ContainerStarted","Data":"7620fb3ed0529cab0da3c0f659b8f1b47ed2e65369328e05b78428d53064c63c"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.839365 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" event={"ID":"e19083b1-791a-4549-b64e-0bb0032abad2","Type":"ContainerStarted","Data":"8fb2e79aa6360d6a5d350a553c0eadfbb0bdcf8fab1a2e66d211fa6472457468"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.840325 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.846864 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" 
podStartSLOduration=4.567263562 podStartE2EDuration="42.846835088s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:32.633266585 +0000 UTC m=+1025.128987507" lastFinishedPulling="2026-02-18 14:17:10.912838101 +0000 UTC m=+1063.408559033" observedRunningTime="2026-02-18 14:17:11.836869641 +0000 UTC m=+1064.332590583" watchObservedRunningTime="2026-02-18 14:17:11.846835088 +0000 UTC m=+1064.342556010" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.847243 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9" event={"ID":"61bc4b17-baf6-435c-9280-b97fcede913c","Type":"ContainerStarted","Data":"9c95a74b7c5f91247d0f3d3bf78efff2492a323360a862043ad22badf22170c7"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.848634 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.860723 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" event={"ID":"8add2ed9-6416-4e9f-a3a1-f8a615962850","Type":"ContainerStarted","Data":"fb20f6681336822e5b3fde9390c367455cb2527599c8ef33b3dfd4dacb5d5012"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.860786 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" event={"ID":"8add2ed9-6416-4e9f-a3a1-f8a615962850","Type":"ContainerStarted","Data":"136e2e3d2aec2d866f777649ca5ce971a99d184f9a9708b48a7455bd547f4b77"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.863395 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.873956 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" event={"ID":"caed7b7d-66db-4bd9-ba33-efc5f3951069","Type":"ContainerStarted","Data":"285c93b9afd3d340ad58d8694787c0f5e2930c20312607e55d96900e5c227db1"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.875169 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.888591 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" event={"ID":"d617f67f-2577-418f-a367-42c366c17980","Type":"ContainerStarted","Data":"70fc40e6f6c7263834206245f7aa6fdbc7f676280152de3526726d7fa2c1d246"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.889584 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.895837 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" podStartSLOduration=3.708109892 podStartE2EDuration="42.895811363s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:31.622379721 +0000 UTC m=+1024.118100653" lastFinishedPulling="2026-02-18 14:17:10.810081202 +0000 UTC m=+1063.305802124" 
observedRunningTime="2026-02-18 14:17:11.857524933 +0000 UTC m=+1064.353245855" watchObservedRunningTime="2026-02-18 14:17:11.895811363 +0000 UTC m=+1064.391532285" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.903762 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" event={"ID":"b1d0315e-6ccb-4c6a-a488-98454bb41358","Type":"ContainerStarted","Data":"019a3b57c7d20066dba4b4a096ca0f3a1ce0be4c39737340f2359542e6a19f7e"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.913163 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" event={"ID":"fb608395-17b5-4b92-a0be-b5abc08ac979","Type":"ContainerStarted","Data":"a085a0d30a2debdcfa4545d3ddb90ae303e71e3d6d75309c439d719f629caed7"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.914912 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.926015 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26" event={"ID":"209f2e6c-29e9-444b-b14a-10eadb782a59","Type":"ContainerStarted","Data":"7e811048dfbc56ead3937cec2dfe2257a0fa6bfe212eafd482c1c40c61d7c7ad"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.927359 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.935117 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9" podStartSLOduration=3.359675139 podStartE2EDuration="42.935090757s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:30.988649093 +0000 UTC m=+1023.484370015" lastFinishedPulling="2026-02-18 14:17:10.564064711 +0000 UTC m=+1063.059785633" observedRunningTime="2026-02-18 14:17:11.878120884 +0000 UTC m=+1064.373841806" watchObservedRunningTime="2026-02-18 14:17:11.935090757 +0000 UTC m=+1064.430811689" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.942182 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" event={"ID":"3b114d0a-837c-4f0c-b02a-db694bdab362","Type":"ContainerStarted","Data":"eef5964af327ffa966bc134cce5ebd8a7f9bba7dd29db4d1f64ad4224a5ee859"} Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.943476 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" Feb 18 14:17:11 crc kubenswrapper[4739]: I0218 14:17:11.961549 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" podStartSLOduration=7.415659863 podStartE2EDuration="42.961527173s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:33.038612219 +0000 UTC m=+1025.534333141" lastFinishedPulling="2026-02-18 14:17:08.584479529 +0000 UTC m=+1061.080200451" observedRunningTime="2026-02-18 14:17:11.911877151 +0000 UTC m=+1064.407598073" watchObservedRunningTime="2026-02-18 14:17:11.961527173 +0000 UTC m=+1064.457248105" Feb 18 14:17:11 crc kubenswrapper[4739]: 
I0218 14:17:11.992533 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" podStartSLOduration=5.08485234 podStartE2EDuration="42.992509221s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:33.039239705 +0000 UTC m=+1025.534960627" lastFinishedPulling="2026-02-18 14:17:10.946896586 +0000 UTC m=+1063.442617508" observedRunningTime="2026-02-18 14:17:11.945999458 +0000 UTC m=+1064.441720390" watchObservedRunningTime="2026-02-18 14:17:11.992509221 +0000 UTC m=+1064.488230163" Feb 18 14:17:12 crc kubenswrapper[4739]: I0218 14:17:12.028489 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" podStartSLOduration=3.9555249569999997 podStartE2EDuration="43.028466573s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:31.616035773 +0000 UTC m=+1024.111756695" lastFinishedPulling="2026-02-18 14:17:10.688977389 +0000 UTC m=+1063.184698311" observedRunningTime="2026-02-18 14:17:11.980710359 +0000 UTC m=+1064.476431281" watchObservedRunningTime="2026-02-18 14:17:12.028466573 +0000 UTC m=+1064.524187495" Feb 18 14:17:12 crc kubenswrapper[4739]: I0218 14:17:12.044981 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" podStartSLOduration=42.044962592 podStartE2EDuration="42.044962592s" podCreationTimestamp="2026-02-18 14:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:17:12.029124069 +0000 UTC m=+1064.524844991" watchObservedRunningTime="2026-02-18 14:17:12.044962592 +0000 UTC m=+1064.540683514" Feb 18 14:17:12 crc kubenswrapper[4739]: I0218 14:17:12.075155 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" podStartSLOduration=4.902543198 podStartE2EDuration="43.07512062s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:32.597686063 +0000 UTC m=+1025.093406985" lastFinishedPulling="2026-02-18 14:17:10.770263485 +0000 UTC m=+1063.265984407" observedRunningTime="2026-02-18 14:17:12.072416573 +0000 UTC m=+1064.568137505" watchObservedRunningTime="2026-02-18 14:17:12.07512062 +0000 UTC m=+1064.570841542" Feb 18 14:17:12 crc kubenswrapper[4739]: I0218 14:17:12.097166 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26" podStartSLOduration=5.135814994 podStartE2EDuration="43.097143657s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:32.604184964 +0000 UTC m=+1025.099905886" lastFinishedPulling="2026-02-18 14:17:10.565513617 +0000 UTC m=+1063.061234549" observedRunningTime="2026-02-18 14:17:12.092193024 +0000 UTC m=+1064.587913966" watchObservedRunningTime="2026-02-18 14:17:12.097143657 +0000 UTC m=+1064.592864579" Feb 18 14:17:12 crc kubenswrapper[4739]: I0218 14:17:12.137672 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" podStartSLOduration=9.193192032 podStartE2EDuration="43.137648931s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" 
firstStartedPulling="2026-02-18 14:16:32.553610029 +0000 UTC m=+1025.049330951" lastFinishedPulling="2026-02-18 14:17:06.498066928 +0000 UTC m=+1058.993787850" observedRunningTime="2026-02-18 14:17:12.131631092 +0000 UTC m=+1064.627352034" watchObservedRunningTime="2026-02-18 14:17:12.137648931 +0000 UTC m=+1064.633369853" Feb 18 14:17:14 crc kubenswrapper[4739]: I0218 14:17:14.971349 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" event={"ID":"92f1b9c3-1bdd-48ca-9a76-68ace2635cf1","Type":"ContainerStarted","Data":"c45220d8814c6c0c18e7cb1262ae8722e0667ae6a5b7a51a97cefb8c990e668f"} Feb 18 14:17:14 crc kubenswrapper[4739]: I0218 14:17:14.972165 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" Feb 18 14:17:14 crc kubenswrapper[4739]: I0218 14:17:14.973707 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" event={"ID":"ac911184-3930-4f7e-9d77-2cc9e7262ea6","Type":"ContainerStarted","Data":"94b685f65defd14ff085edc07cf16a6c9eac5af5a9242e2062a105e29adfcadd"} Feb 18 14:17:14 crc kubenswrapper[4739]: I0218 14:17:14.973951 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" Feb 18 14:17:15 crc kubenswrapper[4739]: I0218 14:17:15.005092 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" podStartSLOduration=5.021651883 podStartE2EDuration="46.005067415s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:33.038941268 +0000 UTC m=+1025.534662190" lastFinishedPulling="2026-02-18 14:17:14.0223568 +0000 UTC m=+1066.518077722" observedRunningTime="2026-02-18 14:17:14.990797391 +0000 UTC m=+1067.486518313" watchObservedRunningTime="2026-02-18 14:17:15.005067415 +0000 UTC m=+1067.500788337" Feb 18 14:17:15 crc kubenswrapper[4739]: I0218 14:17:15.024093 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" podStartSLOduration=5.138166282 podStartE2EDuration="46.024071736s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:33.085721398 +0000 UTC m=+1025.581442320" lastFinishedPulling="2026-02-18 14:17:13.971626852 +0000 UTC m=+1066.467347774" observedRunningTime="2026-02-18 14:17:15.020695502 +0000 UTC m=+1067.516416434" watchObservedRunningTime="2026-02-18 14:17:15.024071736 +0000 UTC m=+1067.519792678" Feb 18 14:17:16 crc kubenswrapper[4739]: I0218 14:17:16.995971 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" event={"ID":"52927612-b074-4573-aa63-41cbb1d704bf","Type":"ContainerStarted","Data":"d3e8ca41d583375bdc3898cd694974bbd81d5102bd70a0f141e5a482d3d4a18a"} Feb 18 14:17:16 crc kubenswrapper[4739]: I0218 14:17:16.996320 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:17:16 crc kubenswrapper[4739]: I0218 14:17:16.998008 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" 
event={"ID":"b1d0315e-6ccb-4c6a-a488-98454bb41358","Type":"ContainerStarted","Data":"309a37ef33b46c2c50248e60dfb4f49973997b9bd9dabd1e8850b219370a129e"} Feb 18 14:17:16 crc kubenswrapper[4739]: I0218 14:17:16.998166 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 14:17:17 crc kubenswrapper[4739]: I0218 14:17:17.033962 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" podStartSLOduration=42.899291999 podStartE2EDuration="48.033935459s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:17:11.285661458 +0000 UTC m=+1063.781382380" lastFinishedPulling="2026-02-18 14:17:16.420304918 +0000 UTC m=+1068.916025840" observedRunningTime="2026-02-18 14:17:17.027687314 +0000 UTC m=+1069.523408236" watchObservedRunningTime="2026-02-18 14:17:17.033935459 +0000 UTC m=+1069.529656401" Feb 18 14:17:17 crc kubenswrapper[4739]: I0218 14:17:17.055114 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" podStartSLOduration=43.011492973 podStartE2EDuration="48.055088294s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:17:11.383122006 +0000 UTC m=+1063.878842928" lastFinishedPulling="2026-02-18 14:17:16.426717327 +0000 UTC m=+1068.922438249" observedRunningTime="2026-02-18 14:17:17.054792906 +0000 UTC m=+1069.550513838" watchObservedRunningTime="2026-02-18 14:17:17.055088294 +0000 UTC m=+1069.550809246" Feb 18 14:17:19 crc kubenswrapper[4739]: I0218 14:17:19.017991 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" event={"ID":"19470a60-c796-4a28-a0e2-65b50fa94ea6","Type":"ContainerStarted","Data":"0e42e02d4125cc15a13435836d7862436df0ae98370b7a452960e4147e247a5c"} Feb 18 14:17:19 crc kubenswrapper[4739]: I0218 14:17:19.629534 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9" Feb 18 14:17:19 crc kubenswrapper[4739]: I0218 14:17:19.655060 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" Feb 18 14:17:19 crc kubenswrapper[4739]: I0218 14:17:19.668677 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" Feb 18 14:17:19 crc kubenswrapper[4739]: I0218 14:17:19.831020 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" Feb 18 14:17:20 crc kubenswrapper[4739]: I0218 14:17:20.029227 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" event={"ID":"d34f7233-92b8-4803-ab81-0da45a4de925","Type":"ContainerStarted","Data":"056e9102a7f1a0d4fcedd4064bb1d26c99b0d9df59bf742820c56be6d652517b"} Feb 18 14:17:20 crc kubenswrapper[4739]: I0218 14:17:20.029363 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" Feb 18 14:17:20 crc kubenswrapper[4739]: I0218 14:17:20.029636 4739 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" Feb 18 14:17:20 crc kubenswrapper[4739]: I0218 14:17:20.045172 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" podStartSLOduration=4.912588097 podStartE2EDuration="51.045152819s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:33.009379674 +0000 UTC m=+1025.505100596" lastFinishedPulling="2026-02-18 14:17:19.141944406 +0000 UTC m=+1071.637665318" observedRunningTime="2026-02-18 14:17:20.042600306 +0000 UTC m=+1072.538321248" watchObservedRunningTime="2026-02-18 14:17:20.045152819 +0000 UTC m=+1072.540873741" Feb 18 14:17:20 crc kubenswrapper[4739]: I0218 14:17:20.062832 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" podStartSLOduration=3.919557637 podStartE2EDuration="51.062812478s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:31.667207043 +0000 UTC m=+1024.162927965" lastFinishedPulling="2026-02-18 14:17:18.810461884 +0000 UTC m=+1071.306182806" observedRunningTime="2026-02-18 14:17:20.055534607 +0000 UTC m=+1072.551255529" watchObservedRunningTime="2026-02-18 14:17:20.062812478 +0000 UTC m=+1072.558533390" Feb 18 14:17:20 crc kubenswrapper[4739]: I0218 14:17:20.100076 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" Feb 18 14:17:20 crc kubenswrapper[4739]: I0218 14:17:20.111180 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" Feb 18 14:17:20 crc kubenswrapper[4739]: I0218 14:17:20.156298 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" Feb 18 14:17:20 crc kubenswrapper[4739]: I0218 14:17:20.307348 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26" Feb 18 14:17:20 crc kubenswrapper[4739]: I0218 14:17:20.626303 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" Feb 18 14:17:21 crc kubenswrapper[4739]: I0218 14:17:21.006308 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" Feb 18 14:17:21 crc kubenswrapper[4739]: I0218 14:17:21.037305 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" Feb 18 14:17:21 crc kubenswrapper[4739]: I0218 14:17:21.042032 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" event={"ID":"538f0d59-9eea-4f76-a310-f7f724593a1e","Type":"ContainerStarted","Data":"79708c0971e70628d1b238bab729e895a79618886caaafc889eed9311e875037"} Feb 18 14:17:21 crc kubenswrapper[4739]: I0218 14:17:21.042217 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" Feb 18 14:17:21 crc 
kubenswrapper[4739]: I0218 14:17:21.044347 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j" event={"ID":"60bad312-a989-43d1-87e6-6c6f10d1ae8f","Type":"ContainerStarted","Data":"ae95d143ffd19524bc2f0012ed6fc8f8a0f41849bc152802007e707635b34cd9"} Feb 18 14:17:21 crc kubenswrapper[4739]: I0218 14:17:21.044582 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j" Feb 18 14:17:21 crc kubenswrapper[4739]: I0218 14:17:21.045916 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs" event={"ID":"8336a5f7-2ff0-440a-88b0-a6ab51692965","Type":"ContainerStarted","Data":"b7d5ac4594945191586b7fa6b5eb9940f8353711f6663591b40371cae7064c56"} Feb 18 14:17:21 crc kubenswrapper[4739]: I0218 14:17:21.090344 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" podStartSLOduration=5.083365445 podStartE2EDuration="52.090329255s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:33.33817492 +0000 UTC m=+1025.833895842" lastFinishedPulling="2026-02-18 14:17:20.34513873 +0000 UTC m=+1072.840859652" observedRunningTime="2026-02-18 14:17:21.088715304 +0000 UTC m=+1073.584436226" watchObservedRunningTime="2026-02-18 14:17:21.090329255 +0000 UTC m=+1073.586050177" Feb 18 14:17:21 crc kubenswrapper[4739]: I0218 14:17:21.106780 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j" podStartSLOduration=4.200252639 podStartE2EDuration="52.106762982s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:32.616755046 +0000 UTC m=+1025.112475968" lastFinishedPulling="2026-02-18 14:17:20.523265389 +0000 UTC m=+1073.018986311" observedRunningTime="2026-02-18 14:17:21.103271356 +0000 UTC m=+1073.598992288" watchObservedRunningTime="2026-02-18 14:17:21.106762982 +0000 UTC m=+1073.602483904" Feb 18 14:17:21 crc kubenswrapper[4739]: I0218 14:17:21.134946 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs" podStartSLOduration=4.28652829 podStartE2EDuration="52.134920961s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:33.002661158 +0000 UTC m=+1025.498382080" lastFinishedPulling="2026-02-18 14:17:20.851053839 +0000 UTC m=+1073.346774751" observedRunningTime="2026-02-18 14:17:21.123748804 +0000 UTC m=+1073.619469726" watchObservedRunningTime="2026-02-18 14:17:21.134920961 +0000 UTC m=+1073.630641903" Feb 18 14:17:22 crc kubenswrapper[4739]: I0218 14:17:22.588415 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 14:17:24 crc kubenswrapper[4739]: I0218 14:17:24.071780 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" event={"ID":"40be8fff-51f0-467a-aca5-517e02eea23b","Type":"ContainerStarted","Data":"683f1a0cb3d323ab64501f029c46a596f10b1e3cdb67aa24d85f590ebb041579"} Feb 18 14:17:24 crc kubenswrapper[4739]: I0218 14:17:24.072547 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" Feb 18 14:17:24 crc kubenswrapper[4739]: I0218 14:17:24.074852 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz" event={"ID":"06163b75-4f40-42a0-83d8-70c935b9172c","Type":"ContainerStarted","Data":"9f2624b4d098577f1d1f21dcd591e0fbf59f2207a8e12521154aa447bfb715be"} Feb 18 14:17:24 crc kubenswrapper[4739]: I0218 14:17:24.077142 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" event={"ID":"2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682","Type":"ContainerStarted","Data":"06f4cd242b305b5c897a9f466c332305032898fdc501afc63b66c8d18af6c3b3"} Feb 18 14:17:24 crc kubenswrapper[4739]: I0218 14:17:24.077427 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" Feb 18 14:17:24 crc kubenswrapper[4739]: I0218 14:17:24.095413 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" podStartSLOduration=4.396738472 podStartE2EDuration="55.095394432s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:32.324849945 +0000 UTC m=+1024.820570867" lastFinishedPulling="2026-02-18 14:17:23.023505905 +0000 UTC m=+1075.519226827" observedRunningTime="2026-02-18 14:17:24.090155092 +0000 UTC m=+1076.585876004" watchObservedRunningTime="2026-02-18 14:17:24.095394432 +0000 UTC m=+1076.591115354" Feb 18 14:17:24 crc kubenswrapper[4739]: I0218 14:17:24.113806 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" podStartSLOduration=4.824085233 podStartE2EDuration="55.113782728s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:32.538142576 +0000 UTC m=+1025.033863498" lastFinishedPulling="2026-02-18 14:17:22.827840061 +0000 UTC m=+1075.323560993" observedRunningTime="2026-02-18 14:17:24.10619841 +0000 UTC m=+1076.601919332" watchObservedRunningTime="2026-02-18 14:17:24.113782728 +0000 UTC m=+1076.609503670" Feb 18 14:17:24 crc kubenswrapper[4739]: I0218 14:17:24.134570 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gszz" podStartSLOduration=4.579704585 podStartE2EDuration="54.134542663s" podCreationTimestamp="2026-02-18 14:16:30 +0000 UTC" firstStartedPulling="2026-02-18 14:16:33.469701322 +0000 UTC m=+1025.965422244" lastFinishedPulling="2026-02-18 14:17:23.02453941 +0000 UTC m=+1075.520260322" observedRunningTime="2026-02-18 14:17:24.12230518 +0000 UTC m=+1076.618026112" watchObservedRunningTime="2026-02-18 14:17:24.134542663 +0000 UTC m=+1076.630263575" Feb 18 14:17:25 crc kubenswrapper[4739]: E0218 14:17:25.411673 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" podUID="6741b4b4-1817-4639-bdf6-b5be2729a1fa" Feb 18 14:17:25 crc kubenswrapper[4739]: I0218 14:17:25.588341 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 14:17:26 crc kubenswrapper[4739]: I0218 14:17:26.445291 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 14:17:29 crc kubenswrapper[4739]: I0218 14:17:29.373076 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:17:29 crc kubenswrapper[4739]: I0218 14:17:29.373418 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:17:29 crc kubenswrapper[4739]: I0218 14:17:29.769700 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" Feb 18 14:17:29 crc kubenswrapper[4739]: I0218 14:17:29.849084 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j" Feb 18 14:17:29 crc kubenswrapper[4739]: I0218 14:17:29.908915 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" Feb 18 14:17:30 crc kubenswrapper[4739]: I0218 14:17:30.279951 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" Feb 18 14:17:30 crc kubenswrapper[4739]: I0218 14:17:30.626105 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" Feb 18 14:17:30 crc kubenswrapper[4739]: I0218 14:17:30.670941 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs" Feb 18 14:17:30 crc kubenswrapper[4739]: I0218 14:17:30.679262 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs" Feb 18 14:17:30 crc kubenswrapper[4739]: I0218 14:17:30.954423 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" Feb 18 14:17:39 crc kubenswrapper[4739]: I0218 14:17:39.192667 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" event={"ID":"6741b4b4-1817-4639-bdf6-b5be2729a1fa","Type":"ContainerStarted","Data":"0e3ddc635df525ddd18d3680b1b38102b9456254f940ba8fc0e4a8a2ed29bc7c"} Feb 18 14:17:39 crc kubenswrapper[4739]: I0218 14:17:39.193529 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" Feb 18 14:17:39 crc kubenswrapper[4739]: I0218 14:17:39.210969 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" 
podStartSLOduration=4.8019964139999995 podStartE2EDuration="1m10.210951728s" podCreationTimestamp="2026-02-18 14:16:29 +0000 UTC" firstStartedPulling="2026-02-18 14:16:33.470695287 +0000 UTC m=+1025.966416209" lastFinishedPulling="2026-02-18 14:17:38.879650601 +0000 UTC m=+1091.375371523" observedRunningTime="2026-02-18 14:17:39.207255957 +0000 UTC m=+1091.702976879" watchObservedRunningTime="2026-02-18 14:17:39.210951728 +0000 UTC m=+1091.706672640" Feb 18 14:17:51 crc kubenswrapper[4739]: I0218 14:17:51.063565 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" Feb 18 14:17:59 crc kubenswrapper[4739]: I0218 14:17:59.372947 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:17:59 crc kubenswrapper[4739]: I0218 14:17:59.373367 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:17:59 crc kubenswrapper[4739]: I0218 14:17:59.373408 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 14:17:59 crc kubenswrapper[4739]: I0218 14:17:59.374121 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a6efc2e2824f0e8bfb870590257af439370630fe923098abd18f500360b6dbf0"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:17:59 crc kubenswrapper[4739]: I0218 14:17:59.374169 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://a6efc2e2824f0e8bfb870590257af439370630fe923098abd18f500360b6dbf0" gracePeriod=600 Feb 18 14:18:00 crc kubenswrapper[4739]: I0218 14:18:00.384607 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="a6efc2e2824f0e8bfb870590257af439370630fe923098abd18f500360b6dbf0" exitCode=0 Feb 18 14:18:00 crc kubenswrapper[4739]: I0218 14:18:00.384685 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"a6efc2e2824f0e8bfb870590257af439370630fe923098abd18f500360b6dbf0"} Feb 18 14:18:00 crc kubenswrapper[4739]: I0218 14:18:00.385981 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"d7b9d56369135778a280da4378067ee9271657484f8ba97b96f463ca53b6178a"} Feb 18 14:18:00 crc kubenswrapper[4739]: I0218 14:18:00.386075 4739 scope.go:117] "RemoveContainer" 
containerID="808b39463ceef987da7bce6ba35b68857fd03ff372e8d867a6a7724e8f73df41" Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.424732 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xpfnx"] Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.427938 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx" Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.431970 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.432055 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.432177 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.432177 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-jdmzz" Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.437052 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xpfnx"] Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.440934 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrtrh\" (UniqueName: \"kubernetes.io/projected/1a5000d3-4c10-42f8-9912-1fa1628fd929-kube-api-access-rrtrh\") pod \"dnsmasq-dns-675f4bcbfc-xpfnx\" (UID: \"1a5000d3-4c10-42f8-9912-1fa1628fd929\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx" Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.440998 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a5000d3-4c10-42f8-9912-1fa1628fd929-config\") pod \"dnsmasq-dns-675f4bcbfc-xpfnx\" (UID: \"1a5000d3-4c10-42f8-9912-1fa1628fd929\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx" Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.499957 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7xg2n"] Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.505174 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n" Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.507349 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.522759 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7xg2n"] Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.542613 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrtrh\" (UniqueName: \"kubernetes.io/projected/1a5000d3-4c10-42f8-9912-1fa1628fd929-kube-api-access-rrtrh\") pod \"dnsmasq-dns-675f4bcbfc-xpfnx\" (UID: \"1a5000d3-4c10-42f8-9912-1fa1628fd929\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx" Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.542778 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a5000d3-4c10-42f8-9912-1fa1628fd929-config\") pod \"dnsmasq-dns-675f4bcbfc-xpfnx\" (UID: \"1a5000d3-4c10-42f8-9912-1fa1628fd929\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx" Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.542862 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa473d6-d18d-484f-ae1e-8691ed20efa1-config\") pod \"dnsmasq-dns-78dd6ddcc-7xg2n\" (UID: \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n" Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.542933 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrjt9\" (UniqueName: \"kubernetes.io/projected/eaa473d6-d18d-484f-ae1e-8691ed20efa1-kube-api-access-vrjt9\") pod \"dnsmasq-dns-78dd6ddcc-7xg2n\" (UID: \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n" Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.542986 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaa473d6-d18d-484f-ae1e-8691ed20efa1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7xg2n\" (UID: \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n" Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.544387 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a5000d3-4c10-42f8-9912-1fa1628fd929-config\") pod \"dnsmasq-dns-675f4bcbfc-xpfnx\" (UID: \"1a5000d3-4c10-42f8-9912-1fa1628fd929\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx" Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.561400 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrtrh\" (UniqueName: \"kubernetes.io/projected/1a5000d3-4c10-42f8-9912-1fa1628fd929-kube-api-access-rrtrh\") pod \"dnsmasq-dns-675f4bcbfc-xpfnx\" (UID: \"1a5000d3-4c10-42f8-9912-1fa1628fd929\") " pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx" Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.644269 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa473d6-d18d-484f-ae1e-8691ed20efa1-config\") pod \"dnsmasq-dns-78dd6ddcc-7xg2n\" (UID: \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n" Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 
14:18:08.644337 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrjt9\" (UniqueName: \"kubernetes.io/projected/eaa473d6-d18d-484f-ae1e-8691ed20efa1-kube-api-access-vrjt9\") pod \"dnsmasq-dns-78dd6ddcc-7xg2n\" (UID: \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n" Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.644375 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaa473d6-d18d-484f-ae1e-8691ed20efa1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7xg2n\" (UID: \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n" Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.645191 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaa473d6-d18d-484f-ae1e-8691ed20efa1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7xg2n\" (UID: \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n" Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.645561 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa473d6-d18d-484f-ae1e-8691ed20efa1-config\") pod \"dnsmasq-dns-78dd6ddcc-7xg2n\" (UID: \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n" Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.663023 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrjt9\" (UniqueName: \"kubernetes.io/projected/eaa473d6-d18d-484f-ae1e-8691ed20efa1-kube-api-access-vrjt9\") pod \"dnsmasq-dns-78dd6ddcc-7xg2n\" (UID: \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n" Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.758199 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx" Feb 18 14:18:08 crc kubenswrapper[4739]: I0218 14:18:08.822122 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n" Feb 18 14:18:09 crc kubenswrapper[4739]: I0218 14:18:09.305664 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xpfnx"] Feb 18 14:18:09 crc kubenswrapper[4739]: I0218 14:18:09.424436 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7xg2n"] Feb 18 14:18:09 crc kubenswrapper[4739]: W0218 14:18:09.428907 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaa473d6_d18d_484f_ae1e_8691ed20efa1.slice/crio-c664961af5f5933902fb83588ea3526b81c5f95ad0a6dd0e56eacb644586d63d WatchSource:0}: Error finding container c664961af5f5933902fb83588ea3526b81c5f95ad0a6dd0e56eacb644586d63d: Status 404 returned error can't find the container with id c664961af5f5933902fb83588ea3526b81c5f95ad0a6dd0e56eacb644586d63d Feb 18 14:18:09 crc kubenswrapper[4739]: I0218 14:18:09.501434 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx" event={"ID":"1a5000d3-4c10-42f8-9912-1fa1628fd929","Type":"ContainerStarted","Data":"4808e9e85e6feee30fab77e12dbad19f1e8587e014af2fadd4de7f34a6f67e25"} Feb 18 14:18:09 crc kubenswrapper[4739]: I0218 14:18:09.502846 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n" event={"ID":"eaa473d6-d18d-484f-ae1e-8691ed20efa1","Type":"ContainerStarted","Data":"c664961af5f5933902fb83588ea3526b81c5f95ad0a6dd0e56eacb644586d63d"} Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.249542 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xpfnx"] Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.289692 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-c68ds"] Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.291147 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-c68ds" Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.308246 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-c68ds"] Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.410119 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6be5923f-70ed-45b5-a747-d4008eaeb656-dns-svc\") pod \"dnsmasq-dns-666b6646f7-c68ds\" (UID: \"6be5923f-70ed-45b5-a747-d4008eaeb656\") " pod="openstack/dnsmasq-dns-666b6646f7-c68ds" Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.410590 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be5923f-70ed-45b5-a747-d4008eaeb656-config\") pod \"dnsmasq-dns-666b6646f7-c68ds\" (UID: \"6be5923f-70ed-45b5-a747-d4008eaeb656\") " pod="openstack/dnsmasq-dns-666b6646f7-c68ds" Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.410669 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gm6x\" (UniqueName: \"kubernetes.io/projected/6be5923f-70ed-45b5-a747-d4008eaeb656-kube-api-access-9gm6x\") pod \"dnsmasq-dns-666b6646f7-c68ds\" (UID: \"6be5923f-70ed-45b5-a747-d4008eaeb656\") " pod="openstack/dnsmasq-dns-666b6646f7-c68ds" Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.512542 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6be5923f-70ed-45b5-a747-d4008eaeb656-dns-svc\") pod \"dnsmasq-dns-666b6646f7-c68ds\" (UID: \"6be5923f-70ed-45b5-a747-d4008eaeb656\") " pod="openstack/dnsmasq-dns-666b6646f7-c68ds" Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.512693 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be5923f-70ed-45b5-a747-d4008eaeb656-config\") pod \"dnsmasq-dns-666b6646f7-c68ds\" (UID: \"6be5923f-70ed-45b5-a747-d4008eaeb656\") " pod="openstack/dnsmasq-dns-666b6646f7-c68ds" Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.512787 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gm6x\" (UniqueName: \"kubernetes.io/projected/6be5923f-70ed-45b5-a747-d4008eaeb656-kube-api-access-9gm6x\") pod \"dnsmasq-dns-666b6646f7-c68ds\" (UID: \"6be5923f-70ed-45b5-a747-d4008eaeb656\") " pod="openstack/dnsmasq-dns-666b6646f7-c68ds" Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.514732 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be5923f-70ed-45b5-a747-d4008eaeb656-config\") pod \"dnsmasq-dns-666b6646f7-c68ds\" (UID: \"6be5923f-70ed-45b5-a747-d4008eaeb656\") " pod="openstack/dnsmasq-dns-666b6646f7-c68ds" Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.518040 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6be5923f-70ed-45b5-a747-d4008eaeb656-dns-svc\") pod \"dnsmasq-dns-666b6646f7-c68ds\" (UID: \"6be5923f-70ed-45b5-a747-d4008eaeb656\") " pod="openstack/dnsmasq-dns-666b6646f7-c68ds" Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.549972 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gm6x\" (UniqueName: 
\"kubernetes.io/projected/6be5923f-70ed-45b5-a747-d4008eaeb656-kube-api-access-9gm6x\") pod \"dnsmasq-dns-666b6646f7-c68ds\" (UID: \"6be5923f-70ed-45b5-a747-d4008eaeb656\") " pod="openstack/dnsmasq-dns-666b6646f7-c68ds" Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.620786 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7xg2n"] Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.628514 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-c68ds" Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.672044 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-q9846"] Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.673672 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.706827 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-q9846"] Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.824059 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-config\") pod \"dnsmasq-dns-57d769cc4f-q9846\" (UID: \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-q9846" Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.824142 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-q9846\" (UID: \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-q9846" Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.825407 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcpzg\" (UniqueName: \"kubernetes.io/projected/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-kube-api-access-tcpzg\") pod \"dnsmasq-dns-57d769cc4f-q9846\" (UID: \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-q9846" Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.927433 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcpzg\" (UniqueName: \"kubernetes.io/projected/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-kube-api-access-tcpzg\") pod \"dnsmasq-dns-57d769cc4f-q9846\" (UID: \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-q9846" Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.927808 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-config\") pod \"dnsmasq-dns-57d769cc4f-q9846\" (UID: \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-q9846" Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.927834 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-q9846\" (UID: \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-q9846" Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.928795 4739 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-q9846\" (UID: \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-q9846" Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.934750 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-config\") pod \"dnsmasq-dns-57d769cc4f-q9846\" (UID: \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-q9846" Feb 18 14:18:11 crc kubenswrapper[4739]: I0218 14:18:11.959229 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcpzg\" (UniqueName: \"kubernetes.io/projected/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-kube-api-access-tcpzg\") pod \"dnsmasq-dns-57d769cc4f-q9846\" (UID: \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\") " pod="openstack/dnsmasq-dns-57d769cc4f-q9846" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.081873 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.174594 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-c68ds"] Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.438966 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.447308 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.451387 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.451646 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.451766 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.451879 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.451902 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.452070 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.458217 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.460658 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bkpbw" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.472436 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.474771 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.528414 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.533751 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.577492 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.594011 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-c68ds" event={"ID":"6be5923f-70ed-45b5-a747-d4008eaeb656","Type":"ContainerStarted","Data":"818a67c85ce926301db3afa89b1bb5c3ac9bbdbced8966f71ba1d63af4f883cc"} Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.599348 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.667456 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.667696 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.667843 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-server-conf\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.667945 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.668067 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.668166 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h92gx\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-kube-api-access-h92gx\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.668596 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.668717 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.668809 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbxbz\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-kube-api-access-vbxbz\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.668879 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.668963 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.669033 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/70500a97-2717-4761-884a-25cf8ab89380-pod-info\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.670979 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.671134 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/70500a97-2717-4761-884a-25cf8ab89380-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.671854 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.671961 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-pod-info\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.672034 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-config-data\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.672168 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.672258 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-pod-info\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.672363 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqscd\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-kube-api-access-xqscd\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.672481 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.672883 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.673006 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-config-data\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.673029 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.673081 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.673199 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.673286 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-server-conf\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.673348 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-config-data\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.673374 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-server-conf\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.673482 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.673516 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.673585 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-23b37086-b6fd-42dd-960e-d907e6689952\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.673667 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.676291 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-q9846"] Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 
14:18:12.775656 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.775742 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbxbz\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-kube-api-access-vbxbz\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.775772 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.775834 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/70500a97-2717-4761-884a-25cf8ab89380-pod-info\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.775861 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.775914 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.775968 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/70500a97-2717-4761-884a-25cf8ab89380-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776003 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776055 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-pod-info\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776080 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-config-data\") pod 
\"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776102 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776151 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-pod-info\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776266 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqscd\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-kube-api-access-xqscd\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776296 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776354 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776427 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-config-data\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776474 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776497 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776545 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776571 4739 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-server-conf\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776593 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-config-data\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776629 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-server-conf\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776655 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776675 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776714 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-23b37086-b6fd-42dd-960e-d907e6689952\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776739 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776796 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776862 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.776895 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-server-conf\") pod \"rabbitmq-server-2\" 
(UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.777269 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.777571 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.777622 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h92gx\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-kube-api-access-h92gx\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.777647 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.781529 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.782221 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-config-data\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.782421 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.786032 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-config-data\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.789818 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-server-conf\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.790198 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.791101 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.796033 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-pod-info\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.796600 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.796912 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.797397 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-config-data\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.798396 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.800239 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-server-conf\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.800673 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.802119 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-server-conf\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.802133 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability 
not set. Skipping MountDevice... Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.802202 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-23b37086-b6fd-42dd-960e-d907e6689952\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1542ad1e95f6d05e9b33a4f8791d4ee2fe2b5bce9c9209ea9b163f0535bf4310/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.802268 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.808671 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-pod-info\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.808915 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.812234 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.812756 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/70500a97-2717-4761-884a-25cf8ab89380-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.814324 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.816292 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.820475 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: 
I0218 14:18:12.820508 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.820554 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0e4a135f402bfdd87a0dd9dc00d6afd10d61dd6559041546aff07ddf4aa84ac2/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.821790 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.824800 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.830183 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbxbz\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-kube-api-access-vbxbz\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.830669 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqscd\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-kube-api-access-xqscd\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.845739 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/70500a97-2717-4761-884a-25cf8ab89380-pod-info\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.845896 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.845963 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/42f2352e597643fb9091206ae40b48fcb025360f730dba5ba00ebee7f81842b7/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.847676 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.855748 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.864397 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.874940 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.875171 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.875307 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.875542 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.875801 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.875930 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bvn4l" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.876090 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.885358 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h92gx\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-kube-api-access-h92gx\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.892016 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.901542 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\") pod \"rabbitmq-server-0\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " pod="openstack/rabbitmq-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.941307 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"pvc-23b37086-b6fd-42dd-960e-d907e6689952\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952\") pod \"rabbitmq-server-1\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " pod="openstack/rabbitmq-server-1" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.957504 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\") pod \"rabbitmq-server-2\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " pod="openstack/rabbitmq-server-2" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.982198 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.982280 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f34a572d-30ca-4de5-bf27-3371e1e9d197-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.982349 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf5kv\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-kube-api-access-rf5kv\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.982398 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.982474 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.982605 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.982805 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.982878 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f34a572d-30ca-4de5-bf27-3371e1e9d197-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.982902 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.983039 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:12 crc kubenswrapper[4739]: I0218 14:18:12.983114 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.085144 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.085229 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f34a572d-30ca-4de5-bf27-3371e1e9d197-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.085271 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf5kv\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-kube-api-access-rf5kv\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.085318 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.085372 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.085396 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.085485 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.085531 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f34a572d-30ca-4de5-bf27-3371e1e9d197-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.085554 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.086557 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.086663 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.086736 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.087051 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.087761 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.087794 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.087896 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.089640 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.089933 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.089967 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b4e22e9c66b4b9e31fc01977dfa2f505609dd5b0e95d61de241c54ade9d7a505/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.091212 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.092384 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f34a572d-30ca-4de5-bf27-3371e1e9d197-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.093323 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.094763 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f34a572d-30ca-4de5-bf27-3371e1e9d197-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.110745 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf5kv\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-kube-api-access-rf5kv\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.113585 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.157414 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") pod \"rabbitmq-cell1-server-0\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:13 crc kubenswrapper[4739]: I0218 14:18:13.213608 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:13.284207 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:13.642560 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" event={"ID":"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc","Type":"ContainerStarted","Data":"8bde76f9b97130d02eb6cd439713bddac781417cc738a4a05c1874baac5770d7"} Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:13.945183 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:13.947022 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:13.957272 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-2snlj" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:13.957602 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:13.957761 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:13.958264 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:13.961536 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:13.970237 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.019598 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acc9bbc5-8705-410b-977b-ca9245834e36-operator-scripts\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.019676 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2njg9\" (UniqueName: \"kubernetes.io/projected/acc9bbc5-8705-410b-977b-ca9245834e36-kube-api-access-2njg9\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.019704 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/acc9bbc5-8705-410b-977b-ca9245834e36-config-data-generated\") pod 
\"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.019843 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-128cd24f-aa04-4a31-b42b-c6becf71901c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-128cd24f-aa04-4a31-b42b-c6becf71901c\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.019890 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/acc9bbc5-8705-410b-977b-ca9245834e36-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.019909 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/acc9bbc5-8705-410b-977b-ca9245834e36-kolla-config\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.019928 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc9bbc5-8705-410b-977b-ca9245834e36-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.019997 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/acc9bbc5-8705-410b-977b-ca9245834e36-config-data-default\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.122164 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-128cd24f-aa04-4a31-b42b-c6becf71901c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-128cd24f-aa04-4a31-b42b-c6becf71901c\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.122225 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/acc9bbc5-8705-410b-977b-ca9245834e36-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.122253 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/acc9bbc5-8705-410b-977b-ca9245834e36-kolla-config\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.122274 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc9bbc5-8705-410b-977b-ca9245834e36-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.122346 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/acc9bbc5-8705-410b-977b-ca9245834e36-config-data-default\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.122416 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acc9bbc5-8705-410b-977b-ca9245834e36-operator-scripts\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.122463 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2njg9\" (UniqueName: \"kubernetes.io/projected/acc9bbc5-8705-410b-977b-ca9245834e36-kube-api-access-2njg9\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.122491 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/acc9bbc5-8705-410b-977b-ca9245834e36-config-data-generated\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.123934 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/acc9bbc5-8705-410b-977b-ca9245834e36-config-data-generated\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.126680 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acc9bbc5-8705-410b-977b-ca9245834e36-operator-scripts\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.127942 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/acc9bbc5-8705-410b-977b-ca9245834e36-config-data-default\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.128231 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/acc9bbc5-8705-410b-977b-ca9245834e36-kolla-config\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.136917 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.136973 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-128cd24f-aa04-4a31-b42b-c6becf71901c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-128cd24f-aa04-4a31-b42b-c6becf71901c\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8876dd33e35d37c7675be2db671fde3d51837d411544d5fae18d0a50fb274985/globalmount\"" pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.140108 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acc9bbc5-8705-410b-977b-ca9245834e36-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.145024 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2njg9\" (UniqueName: \"kubernetes.io/projected/acc9bbc5-8705-410b-977b-ca9245834e36-kube-api-access-2njg9\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.156816 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/acc9bbc5-8705-410b-977b-ca9245834e36-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.306054 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-128cd24f-aa04-4a31-b42b-c6becf71901c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-128cd24f-aa04-4a31-b42b-c6becf71901c\") pod \"openstack-galera-0\" (UID: \"acc9bbc5-8705-410b-977b-ca9245834e36\") " pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:14.594574 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.355376 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.360652 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.365061 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-zbswg" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.365231 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.365386 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.365740 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.385648 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.463877 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/869aa11b-eba7-4598-90dc-d50c642b9120-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.463989 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869aa11b-eba7-4598-90dc-d50c642b9120-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.464399 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/869aa11b-eba7-4598-90dc-d50c642b9120-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.464524 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9fvx\" (UniqueName: \"kubernetes.io/projected/869aa11b-eba7-4598-90dc-d50c642b9120-kube-api-access-x9fvx\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.464841 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5845389b-9f0a-44e0-9fcc-440e420b60f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5845389b-9f0a-44e0-9fcc-440e420b60f5\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.464887 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/869aa11b-eba7-4598-90dc-d50c642b9120-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.465122 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/869aa11b-eba7-4598-90dc-d50c642b9120-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.465965 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/869aa11b-eba7-4598-90dc-d50c642b9120-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.567754 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5845389b-9f0a-44e0-9fcc-440e420b60f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5845389b-9f0a-44e0-9fcc-440e420b60f5\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.568090 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/869aa11b-eba7-4598-90dc-d50c642b9120-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.568192 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/869aa11b-eba7-4598-90dc-d50c642b9120-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.569306 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/869aa11b-eba7-4598-90dc-d50c642b9120-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.569354 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/869aa11b-eba7-4598-90dc-d50c642b9120-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.569376 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869aa11b-eba7-4598-90dc-d50c642b9120-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.569469 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/869aa11b-eba7-4598-90dc-d50c642b9120-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.569504 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-x9fvx\" (UniqueName: \"kubernetes.io/projected/869aa11b-eba7-4598-90dc-d50c642b9120-kube-api-access-x9fvx\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.571091 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/869aa11b-eba7-4598-90dc-d50c642b9120-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.571115 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/869aa11b-eba7-4598-90dc-d50c642b9120-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.572154 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/869aa11b-eba7-4598-90dc-d50c642b9120-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.572707 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/869aa11b-eba7-4598-90dc-d50c642b9120-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.577887 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/869aa11b-eba7-4598-90dc-d50c642b9120-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.587602 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.587739 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869aa11b-eba7-4598-90dc-d50c642b9120-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.589653 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.594964 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.595309 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.595422 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5845389b-9f0a-44e0-9fcc-440e420b60f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5845389b-9f0a-44e0-9fcc-440e420b60f5\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3f7aab65c980fc379d7c82b79c526e7d4095da6614b07787895a3f513563c855/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.596379 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.597328 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-zvx9p" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.603223 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.644892 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9fvx\" (UniqueName: \"kubernetes.io/projected/869aa11b-eba7-4598-90dc-d50c642b9120-kube-api-access-x9fvx\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.673615 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39286c8b-55e8-41a2-9f36-a7ce475e8313-combined-ca-bundle\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.673800 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/39286c8b-55e8-41a2-9f36-a7ce475e8313-memcached-tls-certs\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.673881 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/39286c8b-55e8-41a2-9f36-a7ce475e8313-kolla-config\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.673902 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tr2j\" (UniqueName: \"kubernetes.io/projected/39286c8b-55e8-41a2-9f36-a7ce475e8313-kube-api-access-8tr2j\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.673923 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39286c8b-55e8-41a2-9f36-a7ce475e8313-config-data\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.704510 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5845389b-9f0a-44e0-9fcc-440e420b60f5\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5845389b-9f0a-44e0-9fcc-440e420b60f5\") pod \"openstack-cell1-galera-0\" (UID: \"869aa11b-eba7-4598-90dc-d50c642b9120\") " pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.776217 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/39286c8b-55e8-41a2-9f36-a7ce475e8313-memcached-tls-certs\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.776338 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/39286c8b-55e8-41a2-9f36-a7ce475e8313-kolla-config\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.776364 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tr2j\" (UniqueName: \"kubernetes.io/projected/39286c8b-55e8-41a2-9f36-a7ce475e8313-kube-api-access-8tr2j\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.776386 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39286c8b-55e8-41a2-9f36-a7ce475e8313-config-data\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.776495 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39286c8b-55e8-41a2-9f36-a7ce475e8313-combined-ca-bundle\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.778719 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/39286c8b-55e8-41a2-9f36-a7ce475e8313-config-data\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.780284 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/39286c8b-55e8-41a2-9f36-a7ce475e8313-kolla-config\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.794281 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/39286c8b-55e8-41a2-9f36-a7ce475e8313-memcached-tls-certs\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.804698 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tr2j\" (UniqueName: \"kubernetes.io/projected/39286c8b-55e8-41a2-9f36-a7ce475e8313-kube-api-access-8tr2j\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:15 crc kubenswrapper[4739]: I0218 14:18:15.819871 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39286c8b-55e8-41a2-9f36-a7ce475e8313-combined-ca-bundle\") pod \"memcached-0\" (UID: \"39286c8b-55e8-41a2-9f36-a7ce475e8313\") " pod="openstack/memcached-0" Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 14:18:16.008181 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 14:18:16.057199 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 14:18:16.474800 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 14:18:16.475298 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 14:18:16.497199 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 18 14:18:16 crc kubenswrapper[4739]: W0218 14:18:16.505667 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacc9bbc5_8705_410b_977b_ca9245834e36.slice/crio-15cc3d6411750a8db4747d49c4c5a5a2ab343064e092cd9dfdde295934512fc0 WatchSource:0}: Error finding container 15cc3d6411750a8db4747d49c4c5a5a2ab343064e092cd9dfdde295934512fc0: Status 404 returned error can't find the container with id 15cc3d6411750a8db4747d49c4c5a5a2ab343064e092cd9dfdde295934512fc0 Feb 18 14:18:16 crc kubenswrapper[4739]: W0218 14:18:16.531848 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod846b1cf2_bffb_4eca_a8f2_f3c0fcc7ac4b.slice/crio-a323ec96e46e55ecd38a675963f8fb957be29188446c4c0701ca364f77566a1b WatchSource:0}: Error finding container a323ec96e46e55ecd38a675963f8fb957be29188446c4c0701ca364f77566a1b: Status 404 returned error can't find the container with id a323ec96e46e55ecd38a675963f8fb957be29188446c4c0701ca364f77566a1b Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 14:18:16.540781 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 18 14:18:16 crc kubenswrapper[4739]: W0218 14:18:16.547268 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70500a97_2717_4761_884a_25cf8ab89380.slice/crio-6a1064f065e3c36cfd11b4abc66439e09b22ce13fc43d0cfe21f9e1ccc93bcec WatchSource:0}: Error finding container 6a1064f065e3c36cfd11b4abc66439e09b22ce13fc43d0cfe21f9e1ccc93bcec: Status 404 returned error can't find the container with id 6a1064f065e3c36cfd11b4abc66439e09b22ce13fc43d0cfe21f9e1ccc93bcec Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 14:18:16.570507 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 14:18:16 crc kubenswrapper[4739]: W0218 14:18:16.597931 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5594aaa_fab3_4dad_b79e_17200bc2f1ee.slice/crio-95dc6b6636dbaa09768645df6028b202c5114fe72bc89c98b8330cd58fee1cc8 WatchSource:0}: Error finding container 95dc6b6636dbaa09768645df6028b202c5114fe72bc89c98b8330cd58fee1cc8: Status 404 returned error can't find the container with id 95dc6b6636dbaa09768645df6028b202c5114fe72bc89c98b8330cd58fee1cc8 Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 
14:18:16.713688 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"70500a97-2717-4761-884a-25cf8ab89380","Type":"ContainerStarted","Data":"6a1064f065e3c36cfd11b4abc66439e09b22ce13fc43d0cfe21f9e1ccc93bcec"} Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 14:18:16.726191 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b","Type":"ContainerStarted","Data":"a323ec96e46e55ecd38a675963f8fb957be29188446c4c0701ca364f77566a1b"} Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 14:18:16.728701 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"acc9bbc5-8705-410b-977b-ca9245834e36","Type":"ContainerStarted","Data":"15cc3d6411750a8db4747d49c4c5a5a2ab343064e092cd9dfdde295934512fc0"} Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 14:18:16.730043 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"a5594aaa-fab3-4dad-b79e-17200bc2f1ee","Type":"ContainerStarted","Data":"95dc6b6636dbaa09768645df6028b202c5114fe72bc89c98b8330cd58fee1cc8"} Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 14:18:16.731524 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f34a572d-30ca-4de5-bf27-3371e1e9d197","Type":"ContainerStarted","Data":"d4d2f4d954b6b105d9d4d012df3327d247d4b0d91bb0c3076d3bbe9f637b4cc0"} Feb 18 14:18:16 crc kubenswrapper[4739]: I0218 14:18:16.850883 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 14:18:16 crc kubenswrapper[4739]: W0218 14:18:16.859807 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod869aa11b_eba7_4598_90dc_d50c642b9120.slice/crio-755c86cf8719b7d95450ac686ea1aaa7455b0563e40ff67ef44a26a4978d5cdf WatchSource:0}: Error finding container 755c86cf8719b7d95450ac686ea1aaa7455b0563e40ff67ef44a26a4978d5cdf: Status 404 returned error can't find the container with id 755c86cf8719b7d95450ac686ea1aaa7455b0563e40ff67ef44a26a4978d5cdf Feb 18 14:18:17 crc kubenswrapper[4739]: I0218 14:18:17.077850 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 18 14:18:17 crc kubenswrapper[4739]: I0218 14:18:17.771007 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"869aa11b-eba7-4598-90dc-d50c642b9120","Type":"ContainerStarted","Data":"755c86cf8719b7d95450ac686ea1aaa7455b0563e40ff67ef44a26a4978d5cdf"} Feb 18 14:18:17 crc kubenswrapper[4739]: I0218 14:18:17.774517 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"39286c8b-55e8-41a2-9f36-a7ce475e8313","Type":"ContainerStarted","Data":"0eb41db429ddb736d60791618a1381bad01ee13af0c05c50d21ae73ca7a4d49c"} Feb 18 14:18:18 crc kubenswrapper[4739]: I0218 14:18:18.316695 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 14:18:18 crc kubenswrapper[4739]: I0218 14:18:18.318244 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 14:18:18 crc kubenswrapper[4739]: I0218 14:18:18.320835 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-8hmh8" Feb 18 14:18:18 crc kubenswrapper[4739]: I0218 14:18:18.348633 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 14:18:18 crc kubenswrapper[4739]: I0218 14:18:18.461681 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndzf6\" (UniqueName: \"kubernetes.io/projected/1d9742cc-1407-4631-a6ba-55fe1cc3fe4d-kube-api-access-ndzf6\") pod \"kube-state-metrics-0\" (UID: \"1d9742cc-1407-4631-a6ba-55fe1cc3fe4d\") " pod="openstack/kube-state-metrics-0" Feb 18 14:18:18 crc kubenswrapper[4739]: I0218 14:18:18.572763 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndzf6\" (UniqueName: \"kubernetes.io/projected/1d9742cc-1407-4631-a6ba-55fe1cc3fe4d-kube-api-access-ndzf6\") pod \"kube-state-metrics-0\" (UID: \"1d9742cc-1407-4631-a6ba-55fe1cc3fe4d\") " pod="openstack/kube-state-metrics-0" Feb 18 14:18:18 crc kubenswrapper[4739]: I0218 14:18:18.655250 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndzf6\" (UniqueName: \"kubernetes.io/projected/1d9742cc-1407-4631-a6ba-55fe1cc3fe4d-kube-api-access-ndzf6\") pod \"kube-state-metrics-0\" (UID: \"1d9742cc-1407-4631-a6ba-55fe1cc3fe4d\") " pod="openstack/kube-state-metrics-0" Feb 18 14:18:18 crc kubenswrapper[4739]: I0218 14:18:18.950082 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.367724 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7"] Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.370011 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.381798 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-l487j" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.381984 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.408484 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7"] Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.530687 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-m5hn7\" (UID: \"7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.530823 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt8s4\" (UniqueName: \"kubernetes.io/projected/7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b-kube-api-access-gt8s4\") pod \"observability-ui-dashboards-66cbf594b5-m5hn7\" (UID: \"7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.639353 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt8s4\" (UniqueName: \"kubernetes.io/projected/7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b-kube-api-access-gt8s4\") pod \"observability-ui-dashboards-66cbf594b5-m5hn7\" (UID: \"7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.639657 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-m5hn7\" (UID: \"7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7" Feb 18 14:18:19 crc kubenswrapper[4739]: E0218 14:18:19.639931 4739 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Feb 18 14:18:19 crc kubenswrapper[4739]: E0218 14:18:19.640007 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b-serving-cert podName:7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b nodeName:}" failed. No retries permitted until 2026-02-18 14:18:20.139982084 +0000 UTC m=+1132.635703006 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b-serving-cert") pod "observability-ui-dashboards-66cbf594b5-m5hn7" (UID: "7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b") : secret "observability-ui-dashboards" not found Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.681505 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt8s4\" (UniqueName: \"kubernetes.io/projected/7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b-kube-api-access-gt8s4\") pod \"observability-ui-dashboards-66cbf594b5-m5hn7\" (UID: \"7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.885091 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-b9f98d489-4zk5t"] Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.887479 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.919000 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.927566 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.955874 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-b9f98d489-4zk5t"] Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.990238 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.990518 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.990729 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.990884 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.991392 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.991746 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 14:18:19 crc kubenswrapper[4739]: I0218 14:18:19.998073 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-nz745" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.011989 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.048798 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 
14:18:20.048872 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fdf07d43-6839-4ae1-9efd-bd21557e31f0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049009 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/39496c01-fddc-4d5c-8c1a-32af402a87cd-oauth-serving-cert\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049087 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-config\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049123 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fdf07d43-6839-4ae1-9efd-bd21557e31f0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049241 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049264 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049365 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/39496c01-fddc-4d5c-8c1a-32af402a87cd-console-config\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049397 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049418 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049500 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/39496c01-fddc-4d5c-8c1a-32af402a87cd-service-ca\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049537 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/39496c01-fddc-4d5c-8c1a-32af402a87cd-console-serving-cert\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049576 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39496c01-fddc-4d5c-8c1a-32af402a87cd-trusted-ca-bundle\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049625 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-065eb27a-babd-4c1e-9733-7075a750b869\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049707 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnhmt\" (UniqueName: \"kubernetes.io/projected/fdf07d43-6839-4ae1-9efd-bd21557e31f0-kube-api-access-vnhmt\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049730 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/39496c01-fddc-4d5c-8c1a-32af402a87cd-console-oauth-config\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.049750 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgrzp\" (UniqueName: \"kubernetes.io/projected/39496c01-fddc-4d5c-8c1a-32af402a87cd-kube-api-access-wgrzp\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.051526 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.151963 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnhmt\" (UniqueName: 
\"kubernetes.io/projected/fdf07d43-6839-4ae1-9efd-bd21557e31f0-kube-api-access-vnhmt\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152016 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/39496c01-fddc-4d5c-8c1a-32af402a87cd-console-oauth-config\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152040 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgrzp\" (UniqueName: \"kubernetes.io/projected/39496c01-fddc-4d5c-8c1a-32af402a87cd-kube-api-access-wgrzp\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152102 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152140 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fdf07d43-6839-4ae1-9efd-bd21557e31f0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152172 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/39496c01-fddc-4d5c-8c1a-32af402a87cd-oauth-serving-cert\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152199 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-config\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152222 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fdf07d43-6839-4ae1-9efd-bd21557e31f0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152268 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-m5hn7\" (UID: \"7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152338 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152364 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152433 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/39496c01-fddc-4d5c-8c1a-32af402a87cd-console-config\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152493 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152517 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152560 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/39496c01-fddc-4d5c-8c1a-32af402a87cd-service-ca\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.152590 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/39496c01-fddc-4d5c-8c1a-32af402a87cd-console-serving-cert\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.153475 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/39496c01-fddc-4d5c-8c1a-32af402a87cd-oauth-serving-cert\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.153818 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39496c01-fddc-4d5c-8c1a-32af402a87cd-trusted-ca-bundle\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.154020 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-065eb27a-babd-4c1e-9733-7075a750b869\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.154139 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/39496c01-fddc-4d5c-8c1a-32af402a87cd-console-config\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.154398 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.154871 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.155632 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/39496c01-fddc-4d5c-8c1a-32af402a87cd-service-ca\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.162288 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.175288 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.175836 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-m5hn7\" (UID: \"7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.176471 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/39496c01-fddc-4d5c-8c1a-32af402a87cd-console-serving-cert\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " 
pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.186844 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-config\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.208399 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgrzp\" (UniqueName: \"kubernetes.io/projected/39496c01-fddc-4d5c-8c1a-32af402a87cd-kube-api-access-wgrzp\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.215517 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fdf07d43-6839-4ae1-9efd-bd21557e31f0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.221524 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnhmt\" (UniqueName: \"kubernetes.io/projected/fdf07d43-6839-4ae1-9efd-bd21557e31f0-kube-api-access-vnhmt\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.222360 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/39496c01-fddc-4d5c-8c1a-32af402a87cd-console-oauth-config\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.243497 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.255411 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fdf07d43-6839-4ae1-9efd-bd21557e31f0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.290892 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39496c01-fddc-4d5c-8c1a-32af402a87cd-trusted-ca-bundle\") pod \"console-b9f98d489-4zk5t\" (UID: \"39496c01-fddc-4d5c-8c1a-32af402a87cd\") " pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.301537 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.301763 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-065eb27a-babd-4c1e-9733-7075a750b869\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/01cfb519e92c9e23501f00a5b6c703ca97cb1b944d5fe5c6aa349ce505ad2fe2/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.338922 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.400867 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-065eb27a-babd-4c1e-9733-7075a750b869\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869\") pod \"prometheus-metric-storage-0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.532780 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.640010 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.963589 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zz64p"] Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.965854 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zz64p" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.970973 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-nvrtf" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.971619 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.971858 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 18 14:18:20 crc kubenswrapper[4739]: I0218 14:18:20.979883 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zz64p"] Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.051615 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-5cglq"] Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.053952 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.065841 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5cglq"] Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115240 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-var-run\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115310 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7289493d-f197-436b-bc45-84721d12c034-combined-ca-bundle\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115363 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7289493d-f197-436b-bc45-84721d12c034-var-run-ovn\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115420 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7289493d-f197-436b-bc45-84721d12c034-var-log-ovn\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115485 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7s2q\" (UniqueName: \"kubernetes.io/projected/7289493d-f197-436b-bc45-84721d12c034-kube-api-access-h7s2q\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115508 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7289493d-f197-436b-bc45-84721d12c034-ovn-controller-tls-certs\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115541 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7289493d-f197-436b-bc45-84721d12c034-var-run\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115563 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-var-lib\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115588 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-scripts\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115617 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-var-log\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115654 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7289493d-f197-436b-bc45-84721d12c034-scripts\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115691 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-etc-ovs\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.115720 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxk2w\" (UniqueName: \"kubernetes.io/projected/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-kube-api-access-wxk2w\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218275 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-etc-ovs\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218344 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxk2w\" (UniqueName: \"kubernetes.io/projected/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-kube-api-access-wxk2w\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218425 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-var-run\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218485 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7289493d-f197-436b-bc45-84721d12c034-combined-ca-bundle\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218536 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7289493d-f197-436b-bc45-84721d12c034-var-run-ovn\") pod \"ovn-controller-zz64p\" (UID: 
\"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218586 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7289493d-f197-436b-bc45-84721d12c034-var-log-ovn\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218613 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7s2q\" (UniqueName: \"kubernetes.io/projected/7289493d-f197-436b-bc45-84721d12c034-kube-api-access-h7s2q\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218631 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7289493d-f197-436b-bc45-84721d12c034-ovn-controller-tls-certs\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218655 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7289493d-f197-436b-bc45-84721d12c034-var-run\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218678 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-var-lib\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218701 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-scripts\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218729 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-var-log\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.218746 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7289493d-f197-436b-bc45-84721d12c034-scripts\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.219798 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7289493d-f197-436b-bc45-84721d12c034-var-log-ovn\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.220023 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" 
(UniqueName: \"kubernetes.io/host-path/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-var-run\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.220172 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-etc-ovs\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.220637 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7289493d-f197-436b-bc45-84721d12c034-var-run-ovn\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.220848 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-var-lib\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.220911 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7289493d-f197-436b-bc45-84721d12c034-var-run\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.221027 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7289493d-f197-436b-bc45-84721d12c034-scripts\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.221271 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-var-log\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.223972 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-scripts\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.227214 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7289493d-f197-436b-bc45-84721d12c034-combined-ca-bundle\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.241030 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7289493d-f197-436b-bc45-84721d12c034-ovn-controller-tls-certs\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.249419 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxk2w\" (UniqueName: \"kubernetes.io/projected/3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7-kube-api-access-wxk2w\") pod \"ovn-controller-ovs-5cglq\" (UID: \"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7\") " pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.273493 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7s2q\" (UniqueName: \"kubernetes.io/projected/7289493d-f197-436b-bc45-84721d12c034-kube-api-access-h7s2q\") pod \"ovn-controller-zz64p\" (UID: \"7289493d-f197-436b-bc45-84721d12c034\") " pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.318155 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zz64p" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.394514 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.493319 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.494864 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.500492 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-24fl6" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.500776 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.500937 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.502491 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.507837 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.511057 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.641083 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22289461-6c53-461c-adfe-0f1cd7209928-config\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.641508 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-97b89dbc-a33a-47a0-8df0-c299d08c8362\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97b89dbc-a33a-47a0-8df0-c299d08c8362\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.641577 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22289461-6c53-461c-adfe-0f1cd7209928-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " 
pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.641625 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22289461-6c53-461c-adfe-0f1cd7209928-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.641651 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22289461-6c53-461c-adfe-0f1cd7209928-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.641697 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/22289461-6c53-461c-adfe-0f1cd7209928-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.641753 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22289461-6c53-461c-adfe-0f1cd7209928-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.641830 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q74w7\" (UniqueName: \"kubernetes.io/projected/22289461-6c53-461c-adfe-0f1cd7209928-kube-api-access-q74w7\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.744253 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22289461-6c53-461c-adfe-0f1cd7209928-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.744768 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22289461-6c53-461c-adfe-0f1cd7209928-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.744807 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22289461-6c53-461c-adfe-0f1cd7209928-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.744844 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/22289461-6c53-461c-adfe-0f1cd7209928-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.744890 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22289461-6c53-461c-adfe-0f1cd7209928-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.744946 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q74w7\" (UniqueName: \"kubernetes.io/projected/22289461-6c53-461c-adfe-0f1cd7209928-kube-api-access-q74w7\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.744999 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22289461-6c53-461c-adfe-0f1cd7209928-config\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.745058 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-97b89dbc-a33a-47a0-8df0-c299d08c8362\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97b89dbc-a33a-47a0-8df0-c299d08c8362\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.746932 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22289461-6c53-461c-adfe-0f1cd7209928-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.748062 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22289461-6c53-461c-adfe-0f1cd7209928-config\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.748119 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22289461-6c53-461c-adfe-0f1cd7209928-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.749700 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22289461-6c53-461c-adfe-0f1cd7209928-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.752696 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22289461-6c53-461c-adfe-0f1cd7209928-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.755639 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.755681 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-97b89dbc-a33a-47a0-8df0-c299d08c8362\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97b89dbc-a33a-47a0-8df0-c299d08c8362\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c43fcf5985af0c5e34aea6c044b6fe94957dce2fb6216756fe3ecd427fa83e65/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.773171 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/22289461-6c53-461c-adfe-0f1cd7209928-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.773642 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q74w7\" (UniqueName: \"kubernetes.io/projected/22289461-6c53-461c-adfe-0f1cd7209928-kube-api-access-q74w7\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.826001 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-97b89dbc-a33a-47a0-8df0-c299d08c8362\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97b89dbc-a33a-47a0-8df0-c299d08c8362\") pod \"ovsdbserver-nb-0\" (UID: \"22289461-6c53-461c-adfe-0f1cd7209928\") " pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:21 crc kubenswrapper[4739]: I0218 14:18:21.835156 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.018127 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.020264 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.025113 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.025364 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.027991 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-2djtj" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.028276 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.046638 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.138314 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c434ad-eea8-4896-b65d-26eb1ca89f84-config\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.138396 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c434ad-eea8-4896-b65d-26eb1ca89f84-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.138432 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/74c434ad-eea8-4896-b65d-26eb1ca89f84-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.138497 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2dd2d398-5fef-478f-bbf7-fa8b868c9d46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2dd2d398-5fef-478f-bbf7-fa8b868c9d46\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.138531 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgftg\" (UniqueName: \"kubernetes.io/projected/74c434ad-eea8-4896-b65d-26eb1ca89f84-kube-api-access-sgftg\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.138616 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/74c434ad-eea8-4896-b65d-26eb1ca89f84-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.138691 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/74c434ad-eea8-4896-b65d-26eb1ca89f84-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.138741 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74c434ad-eea8-4896-b65d-26eb1ca89f84-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.241004 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/74c434ad-eea8-4896-b65d-26eb1ca89f84-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.241122 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/74c434ad-eea8-4896-b65d-26eb1ca89f84-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.241174 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74c434ad-eea8-4896-b65d-26eb1ca89f84-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.241233 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c434ad-eea8-4896-b65d-26eb1ca89f84-config\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.241281 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c434ad-eea8-4896-b65d-26eb1ca89f84-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.241310 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/74c434ad-eea8-4896-b65d-26eb1ca89f84-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.241354 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2dd2d398-5fef-478f-bbf7-fa8b868c9d46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2dd2d398-5fef-478f-bbf7-fa8b868c9d46\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.241387 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgftg\" (UniqueName: \"kubernetes.io/projected/74c434ad-eea8-4896-b65d-26eb1ca89f84-kube-api-access-sgftg\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 
14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.242917 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/74c434ad-eea8-4896-b65d-26eb1ca89f84-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.242940 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74c434ad-eea8-4896-b65d-26eb1ca89f84-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.243217 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c434ad-eea8-4896-b65d-26eb1ca89f84-config\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.248132 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c434ad-eea8-4896-b65d-26eb1ca89f84-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.248203 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/74c434ad-eea8-4896-b65d-26eb1ca89f84-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.248632 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.248661 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2dd2d398-5fef-478f-bbf7-fa8b868c9d46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2dd2d398-5fef-478f-bbf7-fa8b868c9d46\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ad264258ca460ea0cafe0fa90875c9c3a404027f6d2571fa7d126eda6292dab/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.267501 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgftg\" (UniqueName: \"kubernetes.io/projected/74c434ad-eea8-4896-b65d-26eb1ca89f84-kube-api-access-sgftg\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.282185 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/74c434ad-eea8-4896-b65d-26eb1ca89f84-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.285896 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2dd2d398-5fef-478f-bbf7-fa8b868c9d46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2dd2d398-5fef-478f-bbf7-fa8b868c9d46\") pod \"ovsdbserver-sb-0\" (UID: \"74c434ad-eea8-4896-b65d-26eb1ca89f84\") " pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:25 crc kubenswrapper[4739]: I0218 14:18:25.350903 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 14:18:39 crc kubenswrapper[4739]: E0218 14:18:39.398401 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 18 14:18:39 crc kubenswrapper[4739]: E0218 14:18:39.399273 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rf5kv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(f34a572d-30ca-4de5-bf27-3371e1e9d197): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:18:39 crc kubenswrapper[4739]: E0218 14:18:39.400683 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/rabbitmq-cell1-server-0" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" Feb 18 14:18:39 crc kubenswrapper[4739]: E0218 14:18:39.409083 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 18 14:18:39 crc kubenswrapper[4739]: E0218 14:18:39.409336 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqscd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(70500a97-2717-4761-884a-25cf8ab89380): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:18:39 crc kubenswrapper[4739]: E0218 14:18:39.411295 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/rabbitmq-server-0" podUID="70500a97-2717-4761-884a-25cf8ab89380" Feb 18 14:18:39 crc kubenswrapper[4739]: I0218 14:18:39.848289 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 14:18:40 crc kubenswrapper[4739]: E0218 14:18:40.108880 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="70500a97-2717-4761-884a-25cf8ab89380" Feb 18 14:18:40 crc kubenswrapper[4739]: E0218 14:18:40.109643 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" Feb 18 14:18:41 crc kubenswrapper[4739]: E0218 14:18:41.924162 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 18 14:18:41 crc kubenswrapper[4739]: E0218 14:18:41.924344 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2njg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(acc9bbc5-8705-410b-977b-ca9245834e36): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:18:41 crc kubenswrapper[4739]: E0218 14:18:41.925685 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="acc9bbc5-8705-410b-977b-ca9245834e36" Feb 18 14:18:41 crc kubenswrapper[4739]: E0218 14:18:41.961295 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 18 14:18:41 crc kubenswrapper[4739]: E0218 14:18:41.961541 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h92gx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-1_openstack(a5594aaa-fab3-4dad-b79e-17200bc2f1ee): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:18:41 crc kubenswrapper[4739]: E0218 14:18:41.966651 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-1" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" Feb 18 14:18:42 crc kubenswrapper[4739]: E0218 14:18:42.122824 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="acc9bbc5-8705-410b-977b-ca9245834e36" Feb 18 14:18:42 crc kubenswrapper[4739]: E0218 14:18:42.122832 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-1" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" Feb 18 14:18:42 crc kubenswrapper[4739]: E0218 14:18:42.226867 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 18 14:18:42 crc kubenswrapper[4739]: E0218 14:18:42.227077 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vbxbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-2_openstack(846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:18:42 crc kubenswrapper[4739]: E0218 14:18:42.228231 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-2" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" Feb 18 14:18:42 crc kubenswrapper[4739]: I0218 14:18:42.241869 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zz64p"] Feb 18 14:18:42 crc kubenswrapper[4739]: E0218 14:18:42.391587 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 18 14:18:42 crc kubenswrapper[4739]: E0218 14:18:42.391779 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x9fvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(869aa11b-eba7-4598-90dc-d50c642b9120): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:18:42 crc kubenswrapper[4739]: E0218 14:18:42.393843 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="869aa11b-eba7-4598-90dc-d50c642b9120" Feb 18 14:18:43 crc kubenswrapper[4739]: E0218 14:18:43.134640 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="869aa11b-eba7-4598-90dc-d50c642b9120" Feb 18 14:18:43 crc kubenswrapper[4739]: E0218 14:18:43.134646 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-2" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" Feb 18 14:18:43 crc kubenswrapper[4739]: E0218 14:18:43.195636 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Feb 18 14:18:43 crc kubenswrapper[4739]: 
E0218 14:18:43.195942 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5b4h59ch77hd4h658h675h59bh589h5dbh65chc8hf8h574h5b9h7bh88h78h689hc8h59fh686h5c5h68fh697h544h596h5c4h5d8h678h684hdh7fq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8tr2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(39286c8b-55e8-41a2-9f36-a7ce475e8313): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:18:43 crc kubenswrapper[4739]: E0218 14:18:43.197746 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc 
error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="39286c8b-55e8-41a2-9f36-a7ce475e8313" Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.134823 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.135531 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rrtrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-xpfnx_openstack(1a5000d3-4c10-42f8-9912-1fa1628fd929): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.136714 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx" podUID="1a5000d3-4c10-42f8-9912-1fa1628fd929" Feb 18 14:18:44 crc kubenswrapper[4739]: I0218 14:18:44.148177 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zz64p" event={"ID":"7289493d-f197-436b-bc45-84721d12c034","Type":"ContainerStarted","Data":"f7b528ec5bf80240e768104dd19a13182dfb81fde383ba626533cbd10bfda010"} Feb 18 14:18:44 crc kubenswrapper[4739]: I0218 14:18:44.151202 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"fdf07d43-6839-4ae1-9efd-bd21557e31f0","Type":"ContainerStarted","Data":"f97314f9f73b65ab6d585d1190d55be82b1924ce7010a229a6c53d15da07f316"} Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.152641 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="39286c8b-55e8-41a2-9f36-a7ce475e8313" Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.251289 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.251811 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tcpzg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-q9846_openstack(d3e2e1a1-a8f7-47c1-9964-399a7d9837fc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.253998 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" 
podUID="d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.342862 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.343020 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrjt9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-7xg2n_openstack(eaa473d6-d18d-484f-ae1e-8691ed20efa1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.344402 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n" podUID="eaa473d6-d18d-484f-ae1e-8691ed20efa1" Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.535333 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.535682 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gm6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-c68ds_openstack(6be5923f-70ed-45b5-a747-d4008eaeb656): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:18:44 crc kubenswrapper[4739]: E0218 14:18:44.537157 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-c68ds" podUID="6be5923f-70ed-45b5-a747-d4008eaeb656" Feb 18 14:18:44 crc kubenswrapper[4739]: I0218 14:18:44.919846 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 14:18:44 crc kubenswrapper[4739]: I0218 14:18:44.944298 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-b9f98d489-4zk5t"] Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.046057 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7"] Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.165569 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b9f98d489-4zk5t" event={"ID":"39496c01-fddc-4d5c-8c1a-32af402a87cd","Type":"ContainerStarted","Data":"0f08196eba7ddd3d1a29a2e9ff2f40c7cf5486a0d373a5269798df70b00991dd"} Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.165626 4739 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-console/console-b9f98d489-4zk5t" event={"ID":"39496c01-fddc-4d5c-8c1a-32af402a87cd","Type":"ContainerStarted","Data":"d48d43d75d9d6ea021fafb66b5ab83ecad75207522f3ea950644f9290946fe01"} Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.167016 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7" event={"ID":"7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b","Type":"ContainerStarted","Data":"8b24603fbb6613a07858086dba4e21c9e54933a795d6b41a1e0a25ca445d072c"} Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.168755 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1d9742cc-1407-4631-a6ba-55fe1cc3fe4d","Type":"ContainerStarted","Data":"2bc5886939c37fb1062674e7d0eff4b81f7f7a7b2294e0f4745de8bbbca3ba11"} Feb 18 14:18:45 crc kubenswrapper[4739]: E0218 14:18:45.171417 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-c68ds" podUID="6be5923f-70ed-45b5-a747-d4008eaeb656" Feb 18 14:18:45 crc kubenswrapper[4739]: E0218 14:18:45.171520 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" podUID="d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.688132 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx" Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.742004 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrtrh\" (UniqueName: \"kubernetes.io/projected/1a5000d3-4c10-42f8-9912-1fa1628fd929-kube-api-access-rrtrh\") pod \"1a5000d3-4c10-42f8-9912-1fa1628fd929\" (UID: \"1a5000d3-4c10-42f8-9912-1fa1628fd929\") " Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.742242 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a5000d3-4c10-42f8-9912-1fa1628fd929-config\") pod \"1a5000d3-4c10-42f8-9912-1fa1628fd929\" (UID: \"1a5000d3-4c10-42f8-9912-1fa1628fd929\") " Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.744377 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a5000d3-4c10-42f8-9912-1fa1628fd929-config" (OuterVolumeSpecName: "config") pod "1a5000d3-4c10-42f8-9912-1fa1628fd929" (UID: "1a5000d3-4c10-42f8-9912-1fa1628fd929"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.751356 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a5000d3-4c10-42f8-9912-1fa1628fd929-kube-api-access-rrtrh" (OuterVolumeSpecName: "kube-api-access-rrtrh") pod "1a5000d3-4c10-42f8-9912-1fa1628fd929" (UID: "1a5000d3-4c10-42f8-9912-1fa1628fd929"). InnerVolumeSpecName "kube-api-access-rrtrh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.824679 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n" Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.857996 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a5000d3-4c10-42f8-9912-1fa1628fd929-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.858030 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrtrh\" (UniqueName: \"kubernetes.io/projected/1a5000d3-4c10-42f8-9912-1fa1628fd929-kube-api-access-rrtrh\") on node \"crc\" DevicePath \"\"" Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.867552 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5cglq"] Feb 18 14:18:45 crc kubenswrapper[4739]: W0218 14:18:45.879061 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d6d7ab5_2170_48ba_b9bf_40da1ab8fdf7.slice/crio-af5449ec9b1fdc0308db2c932a8c84b4af1d08552a68d8a4890dcedddfdab8c4 WatchSource:0}: Error finding container af5449ec9b1fdc0308db2c932a8c84b4af1d08552a68d8a4890dcedddfdab8c4: Status 404 returned error can't find the container with id af5449ec9b1fdc0308db2c932a8c84b4af1d08552a68d8a4890dcedddfdab8c4 Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.959408 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa473d6-d18d-484f-ae1e-8691ed20efa1-config\") pod \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\" (UID: \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\") " Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.959524 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaa473d6-d18d-484f-ae1e-8691ed20efa1-dns-svc\") pod \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\" (UID: \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\") " Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.959577 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrjt9\" (UniqueName: \"kubernetes.io/projected/eaa473d6-d18d-484f-ae1e-8691ed20efa1-kube-api-access-vrjt9\") pod \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\" (UID: \"eaa473d6-d18d-484f-ae1e-8691ed20efa1\") " Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.960029 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaa473d6-d18d-484f-ae1e-8691ed20efa1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eaa473d6-d18d-484f-ae1e-8691ed20efa1" (UID: "eaa473d6-d18d-484f-ae1e-8691ed20efa1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.960127 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaa473d6-d18d-484f-ae1e-8691ed20efa1-config" (OuterVolumeSpecName: "config") pod "eaa473d6-d18d-484f-ae1e-8691ed20efa1" (UID: "eaa473d6-d18d-484f-ae1e-8691ed20efa1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.961278 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa473d6-d18d-484f-ae1e-8691ed20efa1-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.961302 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eaa473d6-d18d-484f-ae1e-8691ed20efa1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:18:45 crc kubenswrapper[4739]: I0218 14:18:45.962996 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa473d6-d18d-484f-ae1e-8691ed20efa1-kube-api-access-vrjt9" (OuterVolumeSpecName: "kube-api-access-vrjt9") pod "eaa473d6-d18d-484f-ae1e-8691ed20efa1" (UID: "eaa473d6-d18d-484f-ae1e-8691ed20efa1"). InnerVolumeSpecName "kube-api-access-vrjt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.065687 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrjt9\" (UniqueName: \"kubernetes.io/projected/eaa473d6-d18d-484f-ae1e-8691ed20efa1-kube-api-access-vrjt9\") on node \"crc\" DevicePath \"\"" Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.182634 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx" event={"ID":"1a5000d3-4c10-42f8-9912-1fa1628fd929","Type":"ContainerDied","Data":"4808e9e85e6feee30fab77e12dbad19f1e8587e014af2fadd4de7f34a6f67e25"} Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.182744 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-xpfnx" Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.184769 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5cglq" event={"ID":"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7","Type":"ContainerStarted","Data":"af5449ec9b1fdc0308db2c932a8c84b4af1d08552a68d8a4890dcedddfdab8c4"} Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.187342 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n" event={"ID":"eaa473d6-d18d-484f-ae1e-8691ed20efa1","Type":"ContainerDied","Data":"c664961af5f5933902fb83588ea3526b81c5f95ad0a6dd0e56eacb644586d63d"} Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.187359 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7xg2n" Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.228242 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-b9f98d489-4zk5t" podStartSLOduration=27.22822313 podStartE2EDuration="27.22822313s" podCreationTimestamp="2026-02-18 14:18:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:18:46.217687637 +0000 UTC m=+1158.713408579" watchObservedRunningTime="2026-02-18 14:18:46.22822313 +0000 UTC m=+1158.723944062" Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.274671 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7xg2n"] Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.295215 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7xg2n"] Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.317472 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xpfnx"] Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.328882 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-xpfnx"] Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.425874 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a5000d3-4c10-42f8-9912-1fa1628fd929" path="/var/lib/kubelet/pods/1a5000d3-4c10-42f8-9912-1fa1628fd929/volumes" Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.426278 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaa473d6-d18d-484f-ae1e-8691ed20efa1" path="/var/lib/kubelet/pods/eaa473d6-d18d-484f-ae1e-8691ed20efa1/volumes" Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.750284 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 14:18:46 crc kubenswrapper[4739]: I0218 14:18:46.877304 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 14:18:49 crc kubenswrapper[4739]: W0218 14:18:49.775735 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74c434ad_eea8_4896_b65d_26eb1ca89f84.slice/crio-cbebddd47cdfa4c650dd1c25506e0ed34487bd0cc3995922e180c92ecbb8eafd WatchSource:0}: Error finding container cbebddd47cdfa4c650dd1c25506e0ed34487bd0cc3995922e180c92ecbb8eafd: Status 404 returned error can't find the container with id cbebddd47cdfa4c650dd1c25506e0ed34487bd0cc3995922e180c92ecbb8eafd Feb 18 14:18:49 crc kubenswrapper[4739]: W0218 14:18:49.776779 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22289461_6c53_461c_adfe_0f1cd7209928.slice/crio-b3b526f3e5352197251c26440e9271e44caedacc21ba4f5d11a4e5a4faf29ec2 WatchSource:0}: Error finding container b3b526f3e5352197251c26440e9271e44caedacc21ba4f5d11a4e5a4faf29ec2: Status 404 returned error can't find the container with id b3b526f3e5352197251c26440e9271e44caedacc21ba4f5d11a4e5a4faf29ec2 Feb 18 14:18:50 crc kubenswrapper[4739]: I0218 14:18:50.220497 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"74c434ad-eea8-4896-b65d-26eb1ca89f84","Type":"ContainerStarted","Data":"cbebddd47cdfa4c650dd1c25506e0ed34487bd0cc3995922e180c92ecbb8eafd"} Feb 18 14:18:50 crc kubenswrapper[4739]: I0218 14:18:50.221309 4739 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"22289461-6c53-461c-adfe-0f1cd7209928","Type":"ContainerStarted","Data":"b3b526f3e5352197251c26440e9271e44caedacc21ba4f5d11a4e5a4faf29ec2"} Feb 18 14:18:50 crc kubenswrapper[4739]: I0218 14:18:50.533661 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:50 crc kubenswrapper[4739]: I0218 14:18:50.533977 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:50 crc kubenswrapper[4739]: I0218 14:18:50.539232 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:51 crc kubenswrapper[4739]: I0218 14:18:51.236562 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 14:18:51 crc kubenswrapper[4739]: I0218 14:18:51.311267 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-58cc898c97-gzzx9"] Feb 18 14:18:53 crc kubenswrapper[4739]: I0218 14:18:53.251855 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdf07d43-6839-4ae1-9efd-bd21557e31f0","Type":"ContainerStarted","Data":"d130ba5106c46e0eaf379f38920ded0167eab599120dd5d9ffdf9b8b0e9aa0ac"} Feb 18 14:18:57 crc kubenswrapper[4739]: I0218 14:18:57.299324 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7" event={"ID":"7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b","Type":"ContainerStarted","Data":"82f79c47c38249a0f8113aec3b2167eaf251f56ebb97ba41b8f99a34053dde50"} Feb 18 14:18:57 crc kubenswrapper[4739]: I0218 14:18:57.301324 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5cglq" event={"ID":"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7","Type":"ContainerStarted","Data":"410b76e30c037f44b7d028b1f407004690683157575a3e07ed0b9d34ed9c5ec1"} Feb 18 14:18:57 crc kubenswrapper[4739]: I0218 14:18:57.303321 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"acc9bbc5-8705-410b-977b-ca9245834e36","Type":"ContainerStarted","Data":"874c74820b18d639be27757d978d0db13d377177e4472870e9ded39d3bfa20c9"} Feb 18 14:18:57 crc kubenswrapper[4739]: I0218 14:18:57.304811 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"22289461-6c53-461c-adfe-0f1cd7209928","Type":"ContainerStarted","Data":"c6681fe2af2fd55098cbbf5b2d0e052ee1979e3c98d9703e64c6493aa37790da"} Feb 18 14:18:57 crc kubenswrapper[4739]: I0218 14:18:57.320951 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m5hn7" podStartSLOduration=28.452850575 podStartE2EDuration="38.320928626s" podCreationTimestamp="2026-02-18 14:18:19 +0000 UTC" firstStartedPulling="2026-02-18 14:18:45.05815156 +0000 UTC m=+1157.553872482" lastFinishedPulling="2026-02-18 14:18:54.926229601 +0000 UTC m=+1167.421950533" observedRunningTime="2026-02-18 14:18:57.314935254 +0000 UTC m=+1169.810656196" watchObservedRunningTime="2026-02-18 14:18:57.320928626 +0000 UTC m=+1169.816649558" Feb 18 14:18:58 crc kubenswrapper[4739]: I0218 14:18:58.316800 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" 
event={"ID":"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b","Type":"ContainerStarted","Data":"aca2d7cf6c996ecda1b70039221c80c30560394fd55fdc793dfd46773ab29a77"} Feb 18 14:18:58 crc kubenswrapper[4739]: I0218 14:18:58.319621 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f34a572d-30ca-4de5-bf27-3371e1e9d197","Type":"ContainerStarted","Data":"a716eae534567c7eacf310c551635181608ae4e159e2fd3e991903215040cab2"} Feb 18 14:18:58 crc kubenswrapper[4739]: I0218 14:18:58.321530 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zz64p" event={"ID":"7289493d-f197-436b-bc45-84721d12c034","Type":"ContainerStarted","Data":"fffe676cfab2c2f4a606a064d4ca13a07363cc63779d67c105c5b541004a6e8a"} Feb 18 14:18:58 crc kubenswrapper[4739]: I0218 14:18:58.321690 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-zz64p" Feb 18 14:18:58 crc kubenswrapper[4739]: I0218 14:18:58.323499 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"74c434ad-eea8-4896-b65d-26eb1ca89f84","Type":"ContainerStarted","Data":"ead5562d421aaba5060c11cf9e9f5c887782f5703e83601e3c750ce7f7961098"} Feb 18 14:18:58 crc kubenswrapper[4739]: I0218 14:18:58.327304 4739 generic.go:334] "Generic (PLEG): container finished" podID="3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7" containerID="410b76e30c037f44b7d028b1f407004690683157575a3e07ed0b9d34ed9c5ec1" exitCode=0 Feb 18 14:18:58 crc kubenswrapper[4739]: I0218 14:18:58.329158 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5cglq" event={"ID":"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7","Type":"ContainerDied","Data":"410b76e30c037f44b7d028b1f407004690683157575a3e07ed0b9d34ed9c5ec1"} Feb 18 14:18:58 crc kubenswrapper[4739]: I0218 14:18:58.368463 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-zz64p" podStartSLOduration=27.714333302 podStartE2EDuration="38.368421266s" podCreationTimestamp="2026-02-18 14:18:20 +0000 UTC" firstStartedPulling="2026-02-18 14:18:44.108411847 +0000 UTC m=+1156.604132769" lastFinishedPulling="2026-02-18 14:18:54.762499811 +0000 UTC m=+1167.258220733" observedRunningTime="2026-02-18 14:18:58.364819084 +0000 UTC m=+1170.860540006" watchObservedRunningTime="2026-02-18 14:18:58.368421266 +0000 UTC m=+1170.864142188" Feb 18 14:18:59 crc kubenswrapper[4739]: I0218 14:18:59.342318 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"70500a97-2717-4761-884a-25cf8ab89380","Type":"ContainerStarted","Data":"50c02016a55a2c9e373d088514e04b072451dfe1867c0fb7a51a817add5d6886"} Feb 18 14:18:59 crc kubenswrapper[4739]: I0218 14:18:59.346683 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5cglq" event={"ID":"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7","Type":"ContainerStarted","Data":"8827372c966e9288064ea8d3b3f6ec236d747758df5699891a9db62b6e833265"} Feb 18 14:18:59 crc kubenswrapper[4739]: I0218 14:18:59.350514 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"869aa11b-eba7-4598-90dc-d50c642b9120","Type":"ContainerStarted","Data":"a3ef49497c95dfe6772ec7c1fb042eaa0e995bd29a78ec8447b2892bb58cef30"} Feb 18 14:18:59 crc kubenswrapper[4739]: I0218 14:18:59.354001 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"1d9742cc-1407-4631-a6ba-55fe1cc3fe4d","Type":"ContainerStarted","Data":"854525aaeba0262ed326c20d6a5adb12a6f5a5f831c0eda717220f2304b4bf4f"} Feb 18 14:18:59 crc kubenswrapper[4739]: I0218 14:18:59.354983 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 18 14:18:59 crc kubenswrapper[4739]: I0218 14:18:59.358381 4739 generic.go:334] "Generic (PLEG): container finished" podID="6be5923f-70ed-45b5-a747-d4008eaeb656" containerID="cfcd2e4e872e3af5710dab363dbe65580e2c5dc1a19ac0d3ddd18b7a4993a7cb" exitCode=0 Feb 18 14:18:59 crc kubenswrapper[4739]: I0218 14:18:59.359750 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-c68ds" event={"ID":"6be5923f-70ed-45b5-a747-d4008eaeb656","Type":"ContainerDied","Data":"cfcd2e4e872e3af5710dab363dbe65580e2c5dc1a19ac0d3ddd18b7a4993a7cb"} Feb 18 14:18:59 crc kubenswrapper[4739]: I0218 14:18:59.427199 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=29.956467544 podStartE2EDuration="41.427170414s" podCreationTimestamp="2026-02-18 14:18:18 +0000 UTC" firstStartedPulling="2026-02-18 14:18:44.930167092 +0000 UTC m=+1157.425888014" lastFinishedPulling="2026-02-18 14:18:56.400869962 +0000 UTC m=+1168.896590884" observedRunningTime="2026-02-18 14:18:59.409934925 +0000 UTC m=+1171.905655867" watchObservedRunningTime="2026-02-18 14:18:59.427170414 +0000 UTC m=+1171.922891346" Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.369823 4739 generic.go:334] "Generic (PLEG): container finished" podID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerID="d130ba5106c46e0eaf379f38920ded0167eab599120dd5d9ffdf9b8b0e9aa0ac" exitCode=0 Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.369912 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdf07d43-6839-4ae1-9efd-bd21557e31f0","Type":"ContainerDied","Data":"d130ba5106c46e0eaf379f38920ded0167eab599120dd5d9ffdf9b8b0e9aa0ac"} Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.373509 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"a5594aaa-fab3-4dad-b79e-17200bc2f1ee","Type":"ContainerStarted","Data":"a1e18a076520af601e6507f431aa025a06385212521ec627530586a088f11655"} Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.375394 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"39286c8b-55e8-41a2-9f36-a7ce475e8313","Type":"ContainerStarted","Data":"edf3147b8d3130f9675e86b1307940f68245f8d8af9ed1e99164984560a1a39b"} Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.376698 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.380735 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-c68ds" event={"ID":"6be5923f-70ed-45b5-a747-d4008eaeb656","Type":"ContainerStarted","Data":"3443e43c58386c804ca6165dd28e66e4ea94a17fafa09b78c69723fdb9a1bd18"} Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.381284 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-c68ds" Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.382903 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"74c434ad-eea8-4896-b65d-26eb1ca89f84","Type":"ContainerStarted","Data":"6cd36b7a4f4aa4fe88020c9da4998dddd480cafb409fd3536e4cea2f42464a7f"} Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.385382 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5cglq" event={"ID":"3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7","Type":"ContainerStarted","Data":"90b114409ae0c12df7f5e3c2d0abb3dcbc6832e00511c218de385692da1a3738"} Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.386024 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.386130 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.389042 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"22289461-6c53-461c-adfe-0f1cd7209928","Type":"ContainerStarted","Data":"b9d0c9e1cda0978464ffa7aad3ccc13df307b6e1e6e4de19f5cdf27549033bcd"} Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.424546 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-c68ds" podStartSLOduration=3.6371383980000003 podStartE2EDuration="49.424524667s" podCreationTimestamp="2026-02-18 14:18:11 +0000 UTC" firstStartedPulling="2026-02-18 14:18:12.239484489 +0000 UTC m=+1124.735205411" lastFinishedPulling="2026-02-18 14:18:58.026870768 +0000 UTC m=+1170.522591680" observedRunningTime="2026-02-18 14:19:00.417320604 +0000 UTC m=+1172.913041546" watchObservedRunningTime="2026-02-18 14:19:00.424524667 +0000 UTC m=+1172.920245599" Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.479674 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=27.503330262 podStartE2EDuration="37.479649971s" podCreationTimestamp="2026-02-18 14:18:23 +0000 UTC" firstStartedPulling="2026-02-18 14:18:49.862865349 +0000 UTC m=+1162.358586491" lastFinishedPulling="2026-02-18 14:18:59.839185258 +0000 UTC m=+1172.334906200" observedRunningTime="2026-02-18 14:19:00.446734463 +0000 UTC m=+1172.942455395" watchObservedRunningTime="2026-02-18 14:19:00.479649971 +0000 UTC m=+1172.975370893" Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.504990 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=30.487702014 podStartE2EDuration="40.504960836s" podCreationTimestamp="2026-02-18 14:18:20 +0000 UTC" firstStartedPulling="2026-02-18 14:18:49.863828173 +0000 UTC m=+1162.359549095" lastFinishedPulling="2026-02-18 14:18:59.881086985 +0000 UTC m=+1172.376807917" observedRunningTime="2026-02-18 14:19:00.501730844 +0000 UTC m=+1172.997451776" watchObservedRunningTime="2026-02-18 14:19:00.504960836 +0000 UTC m=+1173.000681758" Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.530263 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-5cglq" podStartSLOduration=31.486322315 podStartE2EDuration="40.53024281s" podCreationTimestamp="2026-02-18 14:18:20 +0000 UTC" firstStartedPulling="2026-02-18 14:18:45.882117991 +0000 UTC m=+1158.377838913" lastFinishedPulling="2026-02-18 14:18:54.926038486 +0000 UTC m=+1167.421759408" observedRunningTime="2026-02-18 14:19:00.522307758 +0000 UTC m=+1173.018028690" watchObservedRunningTime="2026-02-18 
14:19:00.53024281 +0000 UTC m=+1173.025963732" Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.553486 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.761501492 podStartE2EDuration="45.553459471s" podCreationTimestamp="2026-02-18 14:18:15 +0000 UTC" firstStartedPulling="2026-02-18 14:18:17.160384441 +0000 UTC m=+1129.656105353" lastFinishedPulling="2026-02-18 14:18:59.9523424 +0000 UTC m=+1172.448063332" observedRunningTime="2026-02-18 14:19:00.546987746 +0000 UTC m=+1173.042708698" watchObservedRunningTime="2026-02-18 14:19:00.553459471 +0000 UTC m=+1173.049180413" Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.835904 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 18 14:19:00 crc kubenswrapper[4739]: I0218 14:19:00.875840 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 18 14:19:01 crc kubenswrapper[4739]: I0218 14:19:01.351341 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 18 14:19:01 crc kubenswrapper[4739]: I0218 14:19:01.400669 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 18 14:19:01 crc kubenswrapper[4739]: I0218 14:19:01.403364 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" event={"ID":"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc","Type":"ContainerStarted","Data":"fb30ffa6dd77c2c26a2c94054232a01d5f2a2fce3604e07af9341e21e49fc7b5"} Feb 18 14:19:01 crc kubenswrapper[4739]: I0218 14:19:01.403410 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 18 14:19:01 crc kubenswrapper[4739]: I0218 14:19:01.405101 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 18 14:19:02 crc kubenswrapper[4739]: I0218 14:19:02.417314 4739 generic.go:334] "Generic (PLEG): container finished" podID="d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" containerID="fb30ffa6dd77c2c26a2c94054232a01d5f2a2fce3604e07af9341e21e49fc7b5" exitCode=0 Feb 18 14:19:02 crc kubenswrapper[4739]: I0218 14:19:02.426808 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" event={"ID":"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc","Type":"ContainerDied","Data":"fb30ffa6dd77c2c26a2c94054232a01d5f2a2fce3604e07af9341e21e49fc7b5"} Feb 18 14:19:02 crc kubenswrapper[4739]: I0218 14:19:02.471458 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 18 14:19:02 crc kubenswrapper[4739]: I0218 14:19:02.471876 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 18 14:19:02 crc kubenswrapper[4739]: I0218 14:19:02.817503 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-q9846"] Feb 18 14:19:02 crc kubenswrapper[4739]: I0218 14:19:02.913140 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-q6g47"] Feb 18 14:19:02 crc kubenswrapper[4739]: I0218 14:19:02.916146 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-q6g47" Feb 18 14:19:02 crc kubenswrapper[4739]: I0218 14:19:02.931507 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.004056 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-mgk2p"] Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.010643 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.016085 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.030345 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8daa97ee-3449-4043-8218-71aaa601c37c-ovn-rundir\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.030493 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh5gk\" (UniqueName: \"kubernetes.io/projected/8daa97ee-3449-4043-8218-71aaa601c37c-kube-api-access-dh5gk\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.030686 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8daa97ee-3449-4043-8218-71aaa601c37c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.031044 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8daa97ee-3449-4043-8218-71aaa601c37c-ovs-rundir\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.031080 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8daa97ee-3449-4043-8218-71aaa601c37c-config\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.031131 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8daa97ee-3449-4043-8218-71aaa601c37c-combined-ca-bundle\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.053562 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-q6g47"] Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.092036 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-mgk2p"] Feb 18 14:19:03 crc 
kubenswrapper[4739]: I0218 14:19:03.111731 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.113330 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.116562 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.116750 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.116859 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.117055 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-cn9lh" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.120806 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.132751 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8daa97ee-3449-4043-8218-71aaa601c37c-ovn-rundir\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.132831 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk2jt\" (UniqueName: \"kubernetes.io/projected/3866887c-44e3-4436-bd88-bbc56f572f77-kube-api-access-sk2jt\") pod \"dnsmasq-dns-5bf47b49b7-mgk2p\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.132857 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh5gk\" (UniqueName: \"kubernetes.io/projected/8daa97ee-3449-4043-8218-71aaa601c37c-kube-api-access-dh5gk\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.132915 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8daa97ee-3449-4043-8218-71aaa601c37c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.132977 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-config\") pod \"dnsmasq-dns-5bf47b49b7-mgk2p\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.133002 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8daa97ee-3449-4043-8218-71aaa601c37c-config\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.133018 
4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8daa97ee-3449-4043-8218-71aaa601c37c-ovs-rundir\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.133043 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8daa97ee-3449-4043-8218-71aaa601c37c-combined-ca-bundle\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.133100 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-mgk2p\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.133126 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-mgk2p\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.134682 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8daa97ee-3449-4043-8218-71aaa601c37c-ovn-rundir\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.135232 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8daa97ee-3449-4043-8218-71aaa601c37c-ovs-rundir\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.136016 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8daa97ee-3449-4043-8218-71aaa601c37c-config\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.136069 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-c68ds"] Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.136307 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-c68ds" podUID="6be5923f-70ed-45b5-a747-d4008eaeb656" containerName="dnsmasq-dns" containerID="cri-o://3443e43c58386c804ca6165dd28e66e4ea94a17fafa09b78c69723fdb9a1bd18" gracePeriod=10 Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.144349 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8daa97ee-3449-4043-8218-71aaa601c37c-combined-ca-bundle\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47" Feb 18 14:19:03 crc 
kubenswrapper[4739]: I0218 14:19:03.153822 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8daa97ee-3449-4043-8218-71aaa601c37c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.156357 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-gf2dl"] Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.158002 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh5gk\" (UniqueName: \"kubernetes.io/projected/8daa97ee-3449-4043-8218-71aaa601c37c-kube-api-access-dh5gk\") pod \"ovn-controller-metrics-q6g47\" (UID: \"8daa97ee-3449-4043-8218-71aaa601c37c\") " pod="openstack/ovn-controller-metrics-q6g47" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.159282 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-gf2dl" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.165067 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.174537 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-gf2dl"] Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.234955 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3be45be-9ee4-4114-b2e5-78d9b0341129-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235001 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4lkd\" (UniqueName: \"kubernetes.io/projected/b3be45be-9ee4-4114-b2e5-78d9b0341129-kube-api-access-w4lkd\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235072 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-mgk2p\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235112 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-mgk2p\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235147 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3be45be-9ee4-4114-b2e5-78d9b0341129-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235170 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b3be45be-9ee4-4114-b2e5-78d9b0341129-config\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235214 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235245 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk2jt\" (UniqueName: \"kubernetes.io/projected/3866887c-44e3-4436-bd88-bbc56f572f77-kube-api-access-sk2jt\") pod \"dnsmasq-dns-5bf47b49b7-mgk2p\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235386 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3be45be-9ee4-4114-b2e5-78d9b0341129-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235461 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235520 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-config\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235583 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-dns-svc\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235618 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3be45be-9ee4-4114-b2e5-78d9b0341129-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235645 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3be45be-9ee4-4114-b2e5-78d9b0341129-scripts\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235671 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-config\") pod \"dnsmasq-dns-5bf47b49b7-mgk2p\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.235719 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rjsz\" (UniqueName: \"kubernetes.io/projected/80f2df75-0584-449d-bd30-80aa45c8f5ff-kube-api-access-6rjsz\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.237271 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-mgk2p\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.238477 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-mgk2p\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.241593 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-config\") pod \"dnsmasq-dns-5bf47b49b7-mgk2p\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.255379 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk2jt\" (UniqueName: \"kubernetes.io/projected/3866887c-44e3-4436-bd88-bbc56f572f77-kube-api-access-sk2jt\") pod \"dnsmasq-dns-5bf47b49b7-mgk2p\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.271141 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-q6g47" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.338483 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-config\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.339330 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-dns-svc\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.339495 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3be45be-9ee4-4114-b2e5-78d9b0341129-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.339566 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3be45be-9ee4-4114-b2e5-78d9b0341129-scripts\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.339995 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-config\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.340106 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-dns-svc\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.340988 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rjsz\" (UniqueName: \"kubernetes.io/projected/80f2df75-0584-449d-bd30-80aa45c8f5ff-kube-api-access-6rjsz\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.341118 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3be45be-9ee4-4114-b2e5-78d9b0341129-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.341152 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4lkd\" (UniqueName: \"kubernetes.io/projected/b3be45be-9ee4-4114-b2e5-78d9b0341129-kube-api-access-w4lkd\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.341352 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b3be45be-9ee4-4114-b2e5-78d9b0341129-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.341394 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3be45be-9ee4-4114-b2e5-78d9b0341129-config\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.341463 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.341526 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3be45be-9ee4-4114-b2e5-78d9b0341129-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.341562 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.341957 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3be45be-9ee4-4114-b2e5-78d9b0341129-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.342838 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3be45be-9ee4-4114-b2e5-78d9b0341129-scripts\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.344017 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.344346 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.344653 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3be45be-9ee4-4114-b2e5-78d9b0341129-config\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.351238 4739 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.413858 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3be45be-9ee4-4114-b2e5-78d9b0341129-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.414257 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4lkd\" (UniqueName: \"kubernetes.io/projected/b3be45be-9ee4-4114-b2e5-78d9b0341129-kube-api-access-w4lkd\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.414866 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3be45be-9ee4-4114-b2e5-78d9b0341129-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.416375 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3be45be-9ee4-4114-b2e5-78d9b0341129-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3be45be-9ee4-4114-b2e5-78d9b0341129\") " pod="openstack/ovn-northd-0" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.419432 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rjsz\" (UniqueName: \"kubernetes.io/projected/80f2df75-0584-449d-bd30-80aa45c8f5ff-kube-api-access-6rjsz\") pod \"dnsmasq-dns-8554648995-gf2dl\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " pod="openstack/dnsmasq-dns-8554648995-gf2dl" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.520163 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.542700 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-gf2dl" Feb 18 14:19:03 crc kubenswrapper[4739]: I0218 14:19:03.906079 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-mgk2p"] Feb 18 14:19:03 crc kubenswrapper[4739]: W0218 14:19:03.916784 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3866887c_44e3_4436_bd88_bbc56f572f77.slice/crio-01e502c0c35d2ee85c29fc99b4dc57c774e5e7613cc900ffcd3868b38976b515 WatchSource:0}: Error finding container 01e502c0c35d2ee85c29fc99b4dc57c774e5e7613cc900ffcd3868b38976b515: Status 404 returned error can't find the container with id 01e502c0c35d2ee85c29fc99b4dc57c774e5e7613cc900ffcd3868b38976b515 Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.174363 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-q6g47"] Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.277598 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.286898 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-gf2dl"] Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.450877 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" event={"ID":"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc","Type":"ContainerStarted","Data":"9a8c6991c718d6822034294c3ea725bf4baae3bf25f08bd92ff340a388c73bdb"} Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.451142 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" podUID="d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" containerName="dnsmasq-dns" containerID="cri-o://9a8c6991c718d6822034294c3ea725bf4baae3bf25f08bd92ff340a388c73bdb" gracePeriod=10 Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.451533 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.454738 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-q6g47" event={"ID":"8daa97ee-3449-4043-8218-71aaa601c37c","Type":"ContainerStarted","Data":"b0fb7507bd04ddb20fc9d1843f66653d61547463570194834cee73f2779dcc6b"} Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.457750 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" event={"ID":"3866887c-44e3-4436-bd88-bbc56f572f77","Type":"ContainerStarted","Data":"01e502c0c35d2ee85c29fc99b4dc57c774e5e7613cc900ffcd3868b38976b515"} Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.458703 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-gf2dl" event={"ID":"80f2df75-0584-449d-bd30-80aa45c8f5ff","Type":"ContainerStarted","Data":"6c0344dcd1980d3e621d946739f4b13130dbeab96724b311a0270793512ebb0c"} Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.460614 4739 generic.go:334] "Generic (PLEG): container finished" podID="6be5923f-70ed-45b5-a747-d4008eaeb656" containerID="3443e43c58386c804ca6165dd28e66e4ea94a17fafa09b78c69723fdb9a1bd18" exitCode=0 Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.460707 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-c68ds" 
event={"ID":"6be5923f-70ed-45b5-a747-d4008eaeb656","Type":"ContainerDied","Data":"3443e43c58386c804ca6165dd28e66e4ea94a17fafa09b78c69723fdb9a1bd18"} Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.461691 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b3be45be-9ee4-4114-b2e5-78d9b0341129","Type":"ContainerStarted","Data":"4bb9d2508f342a005d4553e95f9b8ae69a3950ee2fff78abc67f8fbc5d7c9871"} Feb 18 14:19:04 crc kubenswrapper[4739]: I0218 14:19:04.473266 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" podStartSLOduration=-9223371983.381535 podStartE2EDuration="53.473240491s" podCreationTimestamp="2026-02-18 14:18:11 +0000 UTC" firstStartedPulling="2026-02-18 14:18:12.682473129 +0000 UTC m=+1125.178194051" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:04.469928146 +0000 UTC m=+1176.965649088" watchObservedRunningTime="2026-02-18 14:19:04.473240491 +0000 UTC m=+1176.968961413" Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.223614 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-c68ds" Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.314756 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6be5923f-70ed-45b5-a747-d4008eaeb656-dns-svc\") pod \"6be5923f-70ed-45b5-a747-d4008eaeb656\" (UID: \"6be5923f-70ed-45b5-a747-d4008eaeb656\") " Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.314926 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gm6x\" (UniqueName: \"kubernetes.io/projected/6be5923f-70ed-45b5-a747-d4008eaeb656-kube-api-access-9gm6x\") pod \"6be5923f-70ed-45b5-a747-d4008eaeb656\" (UID: \"6be5923f-70ed-45b5-a747-d4008eaeb656\") " Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.315001 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be5923f-70ed-45b5-a747-d4008eaeb656-config\") pod \"6be5923f-70ed-45b5-a747-d4008eaeb656\" (UID: \"6be5923f-70ed-45b5-a747-d4008eaeb656\") " Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.338970 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6be5923f-70ed-45b5-a747-d4008eaeb656-kube-api-access-9gm6x" (OuterVolumeSpecName: "kube-api-access-9gm6x") pod "6be5923f-70ed-45b5-a747-d4008eaeb656" (UID: "6be5923f-70ed-45b5-a747-d4008eaeb656"). InnerVolumeSpecName "kube-api-access-9gm6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.369036 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6be5923f-70ed-45b5-a747-d4008eaeb656-config" (OuterVolumeSpecName: "config") pod "6be5923f-70ed-45b5-a747-d4008eaeb656" (UID: "6be5923f-70ed-45b5-a747-d4008eaeb656"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.378661 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6be5923f-70ed-45b5-a747-d4008eaeb656-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6be5923f-70ed-45b5-a747-d4008eaeb656" (UID: "6be5923f-70ed-45b5-a747-d4008eaeb656"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.418828 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6be5923f-70ed-45b5-a747-d4008eaeb656-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.418874 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gm6x\" (UniqueName: \"kubernetes.io/projected/6be5923f-70ed-45b5-a747-d4008eaeb656-kube-api-access-9gm6x\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.418891 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6be5923f-70ed-45b5-a747-d4008eaeb656-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.472246 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-gf2dl" event={"ID":"80f2df75-0584-449d-bd30-80aa45c8f5ff","Type":"ContainerStarted","Data":"f2cdf7655b497075da25ea2d8a12a5618350bcc5c996868ab38470ae9cd7ab7d"} Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.474852 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-c68ds" event={"ID":"6be5923f-70ed-45b5-a747-d4008eaeb656","Type":"ContainerDied","Data":"818a67c85ce926301db3afa89b1bb5c3ac9bbdbced8966f71ba1d63af4f883cc"} Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.474924 4739 scope.go:117] "RemoveContainer" containerID="3443e43c58386c804ca6165dd28e66e4ea94a17fafa09b78c69723fdb9a1bd18" Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.474879 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-c68ds" Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.477751 4739 generic.go:334] "Generic (PLEG): container finished" podID="d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" containerID="9a8c6991c718d6822034294c3ea725bf4baae3bf25f08bd92ff340a388c73bdb" exitCode=0 Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.477830 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" event={"ID":"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc","Type":"ContainerDied","Data":"9a8c6991c718d6822034294c3ea725bf4baae3bf25f08bd92ff340a388c73bdb"} Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.479739 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-q6g47" event={"ID":"8daa97ee-3449-4043-8218-71aaa601c37c","Type":"ContainerStarted","Data":"c8b0788260d81388963cdc086497eb2881ef21cffe9b4a2c4758d7b22d5d9820"} Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.481500 4739 generic.go:334] "Generic (PLEG): container finished" podID="3866887c-44e3-4436-bd88-bbc56f572f77" containerID="edabb29e619ae1eeb2b3b44d914c9284ac1c7ae85b8069685bf0ec6983667b3d" exitCode=0 Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.481544 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" event={"ID":"3866887c-44e3-4436-bd88-bbc56f572f77","Type":"ContainerDied","Data":"edabb29e619ae1eeb2b3b44d914c9284ac1c7ae85b8069685bf0ec6983667b3d"} Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.502725 4739 scope.go:117] "RemoveContainer" containerID="cfcd2e4e872e3af5710dab363dbe65580e2c5dc1a19ac0d3ddd18b7a4993a7cb" Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.527820 4739 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-c68ds"] Feb 18 14:19:05 crc kubenswrapper[4739]: I0218 14:19:05.547980 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-c68ds"] Feb 18 14:19:06 crc kubenswrapper[4739]: I0218 14:19:06.059629 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 18 14:19:06 crc kubenswrapper[4739]: I0218 14:19:06.247033 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.423336 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6be5923f-70ed-45b5-a747-d4008eaeb656" path="/var/lib/kubelet/pods/6be5923f-70ed-45b5-a747-d4008eaeb656/volumes" Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.442811 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcpzg\" (UniqueName: \"kubernetes.io/projected/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-kube-api-access-tcpzg\") pod \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\" (UID: \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\") " Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.442956 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-dns-svc\") pod \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\" (UID: \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\") " Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.442982 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-config\") pod \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\" (UID: \"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc\") " Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.453577 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-kube-api-access-tcpzg" (OuterVolumeSpecName: "kube-api-access-tcpzg") pod "d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" (UID: "d3e2e1a1-a8f7-47c1-9964-399a7d9837fc"). InnerVolumeSpecName "kube-api-access-tcpzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.502119 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-config" (OuterVolumeSpecName: "config") pod "d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" (UID: "d3e2e1a1-a8f7-47c1-9964-399a7d9837fc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.507394 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" event={"ID":"d3e2e1a1-a8f7-47c1-9964-399a7d9837fc","Type":"ContainerDied","Data":"8bde76f9b97130d02eb6cd439713bddac781417cc738a4a05c1874baac5770d7"} Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.507475 4739 scope.go:117] "RemoveContainer" containerID="9a8c6991c718d6822034294c3ea725bf4baae3bf25f08bd92ff340a388c73bdb" Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.507612 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-q9846" Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.515262 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" (UID: "d3e2e1a1-a8f7-47c1-9964-399a7d9837fc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.520167 4739 generic.go:334] "Generic (PLEG): container finished" podID="80f2df75-0584-449d-bd30-80aa45c8f5ff" containerID="f2cdf7655b497075da25ea2d8a12a5618350bcc5c996868ab38470ae9cd7ab7d" exitCode=0 Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.520251 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-gf2dl" event={"ID":"80f2df75-0584-449d-bd30-80aa45c8f5ff","Type":"ContainerDied","Data":"f2cdf7655b497075da25ea2d8a12a5618350bcc5c996868ab38470ae9cd7ab7d"} Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.556160 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcpzg\" (UniqueName: \"kubernetes.io/projected/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-kube-api-access-tcpzg\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.556187 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.556197 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.560119 4739 scope.go:117] "RemoveContainer" containerID="fb30ffa6dd77c2c26a2c94054232a01d5f2a2fce3604e07af9341e21e49fc7b5" Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.629396 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-q6g47" podStartSLOduration=4.629327718 podStartE2EDuration="4.629327718s" podCreationTimestamp="2026-02-18 14:19:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:06.571579177 +0000 UTC m=+1179.067300119" watchObservedRunningTime="2026-02-18 14:19:06.629327718 +0000 UTC m=+1179.125048640" Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.866241 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-q9846"] Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:06.875805 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-q9846"] Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:07.535526 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" event={"ID":"3866887c-44e3-4436-bd88-bbc56f572f77","Type":"ContainerStarted","Data":"81f81c7066b7b4c95e8c6b6a3d0a11548cf322b1e9bf818f0a394ac79e2c2399"} Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:07.535909 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" Feb 18 14:19:07 crc kubenswrapper[4739]: I0218 14:19:07.560661 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" podStartSLOduration=5.560642269 podStartE2EDuration="5.560642269s" podCreationTimestamp="2026-02-18 14:19:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:07.554146594 +0000 UTC m=+1180.049867546" watchObservedRunningTime="2026-02-18 14:19:07.560642269 +0000 UTC m=+1180.056363191" Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.423253 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" path="/var/lib/kubelet/pods/d3e2e1a1-a8f7-47c1-9964-399a7d9837fc/volumes" Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.548374 4739 generic.go:334] "Generic (PLEG): container finished" podID="acc9bbc5-8705-410b-977b-ca9245834e36" containerID="874c74820b18d639be27757d978d0db13d377177e4472870e9ded39d3bfa20c9" exitCode=0 Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.548465 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"acc9bbc5-8705-410b-977b-ca9245834e36","Type":"ContainerDied","Data":"874c74820b18d639be27757d978d0db13d377177e4472870e9ded39d3bfa20c9"} Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.554261 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-gf2dl" event={"ID":"80f2df75-0584-449d-bd30-80aa45c8f5ff","Type":"ContainerStarted","Data":"bd4ca7eba39454221d510f944a98375576604027d6f8bc4b8cf191891479a9fb"} Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.813018 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-mgk2p"] Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.837235 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lgwdh"] Feb 18 14:19:08 crc kubenswrapper[4739]: E0218 14:19:08.837610 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" containerName="dnsmasq-dns" Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.837622 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" containerName="dnsmasq-dns" Feb 18 14:19:08 crc kubenswrapper[4739]: E0218 14:19:08.837637 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" containerName="init" Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.837642 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" containerName="init" Feb 18 14:19:08 crc kubenswrapper[4739]: E0218 14:19:08.837658 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be5923f-70ed-45b5-a747-d4008eaeb656" containerName="init" Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.837664 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be5923f-70ed-45b5-a747-d4008eaeb656" containerName="init" Feb 18 14:19:08 crc kubenswrapper[4739]: E0218 14:19:08.837681 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be5923f-70ed-45b5-a747-d4008eaeb656" containerName="dnsmasq-dns" Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.837686 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be5923f-70ed-45b5-a747-d4008eaeb656" containerName="dnsmasq-dns" Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.837848 4739 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d3e2e1a1-a8f7-47c1-9964-399a7d9837fc" containerName="dnsmasq-dns" Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.837867 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6be5923f-70ed-45b5-a747-d4008eaeb656" containerName="dnsmasq-dns" Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.838828 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.863237 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lgwdh"] Feb 18 14:19:08 crc kubenswrapper[4739]: I0218 14:19:08.956963 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.009609 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.009699 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-config\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.009751 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.009860 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.009895 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpddl\" (UniqueName: \"kubernetes.io/projected/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-kube-api-access-zpddl\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.112264 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.112608 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpddl\" (UniqueName: \"kubernetes.io/projected/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-kube-api-access-zpddl\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.112774 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.112940 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-config\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.113087 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.113185 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.113757 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.114380 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.114539 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-config\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.139867 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpddl\" (UniqueName: \"kubernetes.io/projected/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-kube-api-access-zpddl\") pod \"dnsmasq-dns-b8fbc5445-lgwdh\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.168134 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.563400 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" podUID="3866887c-44e3-4436-bd88-bbc56f572f77" containerName="dnsmasq-dns" containerID="cri-o://81f81c7066b7b4c95e8c6b6a3d0a11548cf322b1e9bf818f0a394ac79e2c2399" gracePeriod=10 Feb 18 14:19:09 crc kubenswrapper[4739]: I0218 14:19:09.666116 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lgwdh"] Feb 18 14:19:09 crc kubenswrapper[4739]: W0218 14:19:09.673494 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1ac31ff_21d1_41d9_9b77_15e64a2cd5f0.slice/crio-d2bcc5bdfd6b01d7eae8c031aa45506d66a71e0990ef1e90815d622f0b826c17 WatchSource:0}: Error finding container d2bcc5bdfd6b01d7eae8c031aa45506d66a71e0990ef1e90815d622f0b826c17: Status 404 returned error can't find the container with id d2bcc5bdfd6b01d7eae8c031aa45506d66a71e0990ef1e90815d622f0b826c17 Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.093357 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.100958 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.103483 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.103629 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.103634 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.103645 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-l5wd5" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.120651 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.240699 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgm4b\" (UniqueName: \"kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-kube-api-access-bgm4b\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.240758 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2d330c6d-b770-4344-88bc-9a48597d53ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d330c6d-b770-4344-88bc-9a48597d53ae\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.240969 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4da69d20-d4af-4d8d-b1e1-5026676d2078-lock\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.240993 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4da69d20-d4af-4d8d-b1e1-5026676d2078-cache\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.241047 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da69d20-d4af-4d8d-b1e1-5026676d2078-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.241083 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.343535 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4da69d20-d4af-4d8d-b1e1-5026676d2078-lock\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.343597 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4da69d20-d4af-4d8d-b1e1-5026676d2078-cache\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.343637 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da69d20-d4af-4d8d-b1e1-5026676d2078-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.343666 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.343721 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgm4b\" (UniqueName: \"kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-kube-api-access-bgm4b\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.343744 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2d330c6d-b770-4344-88bc-9a48597d53ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d330c6d-b770-4344-88bc-9a48597d53ae\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: E0218 14:19:10.344225 4739 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 14:19:10 crc kubenswrapper[4739]: E0218 14:19:10.344271 4739 projected.go:194] Error preparing data for projected volume etc-swift for pod 
openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 14:19:10 crc kubenswrapper[4739]: E0218 14:19:10.344334 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift podName:4da69d20-d4af-4d8d-b1e1-5026676d2078 nodeName:}" failed. No retries permitted until 2026-02-18 14:19:10.844311021 +0000 UTC m=+1183.340032023 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift") pod "swift-storage-0" (UID: "4da69d20-d4af-4d8d-b1e1-5026676d2078") : configmap "swift-ring-files" not found Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.344871 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4da69d20-d4af-4d8d-b1e1-5026676d2078-lock\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.344986 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4da69d20-d4af-4d8d-b1e1-5026676d2078-cache\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.352768 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da69d20-d4af-4d8d-b1e1-5026676d2078-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.365866 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgm4b\" (UniqueName: \"kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-kube-api-access-bgm4b\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.366380 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.366434 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2d330c6d-b770-4344-88bc-9a48597d53ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d330c6d-b770-4344-88bc-9a48597d53ae\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/829f65a67044aa26f8514bb78b3970abc3028c65012918f695be6c1b9f081038/globalmount\"" pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.407270 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2d330c6d-b770-4344-88bc-9a48597d53ae\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d330c6d-b770-4344-88bc-9a48597d53ae\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.573138 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" event={"ID":"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0","Type":"ContainerStarted","Data":"d2bcc5bdfd6b01d7eae8c031aa45506d66a71e0990ef1e90815d622f0b826c17"} Feb 18 14:19:10 crc kubenswrapper[4739]: I0218 14:19:10.855351 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:10 crc kubenswrapper[4739]: E0218 14:19:10.855760 4739 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 14:19:10 crc kubenswrapper[4739]: E0218 14:19:10.855851 4739 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 14:19:10 crc kubenswrapper[4739]: E0218 14:19:10.856010 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift podName:4da69d20-d4af-4d8d-b1e1-5026676d2078 nodeName:}" failed. No retries permitted until 2026-02-18 14:19:11.855987884 +0000 UTC m=+1184.351708826 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift") pod "swift-storage-0" (UID: "4da69d20-d4af-4d8d-b1e1-5026676d2078") : configmap "swift-ring-files" not found Feb 18 14:19:11 crc kubenswrapper[4739]: I0218 14:19:11.882079 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:11 crc kubenswrapper[4739]: E0218 14:19:11.882743 4739 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 14:19:11 crc kubenswrapper[4739]: E0218 14:19:11.882802 4739 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 14:19:11 crc kubenswrapper[4739]: E0218 14:19:11.882855 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift podName:4da69d20-d4af-4d8d-b1e1-5026676d2078 nodeName:}" failed. No retries permitted until 2026-02-18 14:19:13.882835139 +0000 UTC m=+1186.378556061 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift") pod "swift-storage-0" (UID: "4da69d20-d4af-4d8d-b1e1-5026676d2078") : configmap "swift-ring-files" not found Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.353142 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" podUID="3866887c-44e3-4436-bd88-bbc56f572f77" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.145:5353: connect: connection refused" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.725101 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-cfjpx"] Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.731108 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.733744 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.733753 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.734189 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.736345 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-cfjpx"] Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.769423 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-combined-ca-bundle\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.769742 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-dispersionconf\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.769828 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-scripts\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.769845 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twx6f\" (UniqueName: \"kubernetes.io/projected/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-kube-api-access-twx6f\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.769876 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-etc-swift\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.769919 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-ring-data-devices\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.769958 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-swiftconf\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 
14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.877521 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-combined-ca-bundle\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.877578 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-dispersionconf\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.877685 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-scripts\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.877711 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twx6f\" (UniqueName: \"kubernetes.io/projected/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-kube-api-access-twx6f\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.877761 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-etc-swift\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.877800 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-ring-data-devices\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.877863 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-swiftconf\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.879166 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-scripts\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.879171 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-ring-data-devices\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.880096 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-etc-swift\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.884527 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-combined-ca-bundle\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.889813 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-swiftconf\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.892766 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-dispersionconf\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.901859 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twx6f\" (UniqueName: \"kubernetes.io/projected/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-kube-api-access-twx6f\") pod \"swift-ring-rebalance-cfjpx\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:13 crc kubenswrapper[4739]: I0218 14:19:13.980496 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:13 crc kubenswrapper[4739]: E0218 14:19:13.980800 4739 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 14:19:13 crc kubenswrapper[4739]: E0218 14:19:13.980835 4739 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 14:19:13 crc kubenswrapper[4739]: E0218 14:19:13.980906 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift podName:4da69d20-d4af-4d8d-b1e1-5026676d2078 nodeName:}" failed. No retries permitted until 2026-02-18 14:19:17.980877157 +0000 UTC m=+1190.476598079 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift") pod "swift-storage-0" (UID: "4da69d20-d4af-4d8d-b1e1-5026676d2078") : configmap "swift-ring-files" not found Feb 18 14:19:14 crc kubenswrapper[4739]: I0218 14:19:14.051764 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:16 crc kubenswrapper[4739]: I0218 14:19:16.296738 4739 generic.go:334] "Generic (PLEG): container finished" podID="3866887c-44e3-4436-bd88-bbc56f572f77" containerID="81f81c7066b7b4c95e8c6b6a3d0a11548cf322b1e9bf818f0a394ac79e2c2399" exitCode=0 Feb 18 14:19:16 crc kubenswrapper[4739]: I0218 14:19:16.296812 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" event={"ID":"3866887c-44e3-4436-bd88-bbc56f572f77","Type":"ContainerDied","Data":"81f81c7066b7b4c95e8c6b6a3d0a11548cf322b1e9bf818f0a394ac79e2c2399"} Feb 18 14:19:16 crc kubenswrapper[4739]: I0218 14:19:16.369171 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-58cc898c97-gzzx9" podUID="4cd95c4f-592d-4c7e-bdeb-ec99b168126b" containerName="console" containerID="cri-o://0944c4f82b66901b45134e70e812dca310249100c057d0ce2374a1d9db397c6f" gracePeriod=15 Feb 18 14:19:17 crc kubenswrapper[4739]: I0218 14:19:17.310069 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"acc9bbc5-8705-410b-977b-ca9245834e36","Type":"ContainerStarted","Data":"fbee4474fb7d9fba9da96c073301f9e9551a71041a83e9f79d995e7346274e4f"} Feb 18 14:19:17 crc kubenswrapper[4739]: I0218 14:19:17.311987 4739 generic.go:334] "Generic (PLEG): container finished" podID="869aa11b-eba7-4598-90dc-d50c642b9120" containerID="a3ef49497c95dfe6772ec7c1fb042eaa0e995bd29a78ec8447b2892bb58cef30" exitCode=0 Feb 18 14:19:17 crc kubenswrapper[4739]: I0218 14:19:17.312058 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"869aa11b-eba7-4598-90dc-d50c642b9120","Type":"ContainerDied","Data":"a3ef49497c95dfe6772ec7c1fb042eaa0e995bd29a78ec8447b2892bb58cef30"} Feb 18 14:19:17 crc kubenswrapper[4739]: I0218 14:19:17.316940 4739 generic.go:334] "Generic (PLEG): container finished" podID="b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" containerID="444fdbf2047039f125d6d76b03e432e4f2458521013159c69b011aaf37854298" exitCode=0 Feb 18 14:19:17 crc kubenswrapper[4739]: I0218 14:19:17.317021 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" event={"ID":"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0","Type":"ContainerDied","Data":"444fdbf2047039f125d6d76b03e432e4f2458521013159c69b011aaf37854298"} Feb 18 14:19:17 crc kubenswrapper[4739]: I0218 14:19:17.320087 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-58cc898c97-gzzx9_4cd95c4f-592d-4c7e-bdeb-ec99b168126b/console/0.log" Feb 18 14:19:17 crc kubenswrapper[4739]: I0218 14:19:17.320132 4739 generic.go:334] "Generic (PLEG): container finished" podID="4cd95c4f-592d-4c7e-bdeb-ec99b168126b" containerID="0944c4f82b66901b45134e70e812dca310249100c057d0ce2374a1d9db397c6f" exitCode=2 Feb 18 14:19:17 crc kubenswrapper[4739]: I0218 14:19:17.321262 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58cc898c97-gzzx9" event={"ID":"4cd95c4f-592d-4c7e-bdeb-ec99b168126b","Type":"ContainerDied","Data":"0944c4f82b66901b45134e70e812dca310249100c057d0ce2374a1d9db397c6f"} Feb 18 14:19:17 crc kubenswrapper[4739]: I0218 14:19:17.321304 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-gf2dl" Feb 18 14:19:17 crc kubenswrapper[4739]: I0218 14:19:17.323211 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-8554648995-gf2dl" Feb 18 14:19:17 crc kubenswrapper[4739]: I0218 14:19:17.342506 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=26.375498707 podStartE2EDuration="1m5.34248723s" podCreationTimestamp="2026-02-18 14:18:12 +0000 UTC" firstStartedPulling="2026-02-18 14:18:16.531954466 +0000 UTC m=+1129.027675398" lastFinishedPulling="2026-02-18 14:18:55.498942999 +0000 UTC m=+1167.994663921" observedRunningTime="2026-02-18 14:19:17.334064025 +0000 UTC m=+1189.829784947" watchObservedRunningTime="2026-02-18 14:19:17.34248723 +0000 UTC m=+1189.838208172" Feb 18 14:19:17 crc kubenswrapper[4739]: I0218 14:19:17.990265 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:17 crc kubenswrapper[4739]: E0218 14:19:17.990436 4739 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 14:19:17 crc kubenswrapper[4739]: E0218 14:19:17.990483 4739 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 14:19:17 crc kubenswrapper[4739]: E0218 14:19:17.990533 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift podName:4da69d20-d4af-4d8d-b1e1-5026676d2078 nodeName:}" failed. No retries permitted until 2026-02-18 14:19:25.990516586 +0000 UTC m=+1198.486237508 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift") pod "swift-storage-0" (UID: "4da69d20-d4af-4d8d-b1e1-5026676d2078") : configmap "swift-ring-files" not found Feb 18 14:19:18 crc kubenswrapper[4739]: E0218 14:19:18.354592 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741" Feb 18 14:19:18 crc kubenswrapper[4739]: E0218 14:19:18.354784 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus,Image:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741,Command:[],Args:[--config.file=/etc/prometheus/config_out/prometheus.env.yaml --web.enable-lifecycle --web.route-prefix=/ --storage.tsdb.retention.time=24h --storage.tsdb.path=/prometheus 
--web.config.file=/etc/prometheus/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/prometheus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-db,ReadOnly:false,MountPath:/prometheus,SubPath:prometheus-db,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/prometheus/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vnhmt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:15,SuccessThreshold:1,FailureThreshold:60,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(fdf07d43-6839-4ae1-9efd-bd21557e31f0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 
14:19:18.358285 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" event={"ID":"3866887c-44e3-4436-bd88-bbc56f572f77","Type":"ContainerDied","Data":"01e502c0c35d2ee85c29fc99b4dc57c774e5e7613cc900ffcd3868b38976b515"} Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.358332 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01e502c0c35d2ee85c29fc99b4dc57c774e5e7613cc900ffcd3868b38976b515" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.388902 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.431297 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-gf2dl" podStartSLOduration=15.431270732 podStartE2EDuration="15.431270732s" podCreationTimestamp="2026-02-18 14:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:17.404157191 +0000 UTC m=+1189.899878113" watchObservedRunningTime="2026-02-18 14:19:18.431270732 +0000 UTC m=+1190.926991664" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.502248 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-dns-svc\") pod \"3866887c-44e3-4436-bd88-bbc56f572f77\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.502389 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-ovsdbserver-nb\") pod \"3866887c-44e3-4436-bd88-bbc56f572f77\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.502517 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk2jt\" (UniqueName: \"kubernetes.io/projected/3866887c-44e3-4436-bd88-bbc56f572f77-kube-api-access-sk2jt\") pod \"3866887c-44e3-4436-bd88-bbc56f572f77\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.502631 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-config\") pod \"3866887c-44e3-4436-bd88-bbc56f572f77\" (UID: \"3866887c-44e3-4436-bd88-bbc56f572f77\") " Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.507989 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3866887c-44e3-4436-bd88-bbc56f572f77-kube-api-access-sk2jt" (OuterVolumeSpecName: "kube-api-access-sk2jt") pod "3866887c-44e3-4436-bd88-bbc56f572f77" (UID: "3866887c-44e3-4436-bd88-bbc56f572f77"). InnerVolumeSpecName "kube-api-access-sk2jt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.556389 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3866887c-44e3-4436-bd88-bbc56f572f77" (UID: "3866887c-44e3-4436-bd88-bbc56f572f77"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.568878 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-config" (OuterVolumeSpecName: "config") pod "3866887c-44e3-4436-bd88-bbc56f572f77" (UID: "3866887c-44e3-4436-bd88-bbc56f572f77"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:18 crc kubenswrapper[4739]: E0218 14:19:18.569302 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified" Feb 18 14:19:18 crc kubenswrapper[4739]: E0218 14:19:18.569534 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-northd,Image:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,Command:[/usr/bin/ovn-northd],Args:[-vfile:off -vconsole:info --n-threads=1 --ovnnb-db=ssl:ovsdbserver-nb-0.openstack.svc.cluster.local:6641 --ovnsb-db=ssl:ovsdbserver-sb-0.openstack.svc.cluster.local:6642 --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7fh5d6h8dh574h586h548h5f8h657h84h9dh8chc6h84h5c7h57h8hc6h559h88h57h64dhb8h95h9fh647h67dh55ch65hffh559hb5h695q,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:certs,Value:nf4hcfh556hddh557h5fbhd5h5dfh575h5b5h694h55ch55bh674h67h5d4hdfh5b9h54fh597hc7h598h9h8h568h5b5h55fh78h566h676h54h577q,ValueFrom:nil,},EnvVar{Name:certs_metrics,Value:nc7h674h559h684h5c8h77hbfh55h5b8h5c6h5f9h5cdh75h67bh55fh67fh5f5h5bbh66ch4h556h558hbfh5dh57bh588h56dhc7h68h57chc4h86q,ValueFrom:nil,},EnvVar{Name:ovnnorthd-config,Value:n5c8h7ch56bh8dh8hc4h5dch9dh68h6bhb7h598h549h5dbh66fh6bh5b4h5cch5d6h55ch57fhfch588h89h5ddh5d6h65bh65bh8dhc4h67dh569q,ValueFrom:nil,},EnvVar{Name:ovnnorthd-scripts,Value:n664hd8h66ch58dh64hc9h66bhd4h558h697h67bh557hdch664h567h669h555h696h556h556h5fh5bh569hbh665h9dh4h9bh564hc8h5b7h5c4q,ValueFrom:nil,},EnvVar{Name:tls-ca-bundle.pem,Value:n5cch5f9h7fhbhbch58fhd8h58bh659h5c5h67dh66fh5h6fh545hbh68dh685h5fdh676h599h679h5ffh585h5f6h5c5h588h667h676h575h5h7q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4lkd,ReadOnly:true,MountPa
th:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/status_check.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/status_check.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-northd-0_openstack(b3be45be-9ee4-4114-b2e5-78d9b0341129): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.585344 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3866887c-44e3-4436-bd88-bbc56f572f77" (UID: "3866887c-44e3-4436-bd88-bbc56f572f77"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.607046 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.607082 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.607095 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk2jt\" (UniqueName: \"kubernetes.io/projected/3866887c-44e3-4436-bd88-bbc56f572f77-kube-api-access-sk2jt\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.607105 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3866887c-44e3-4436-bd88-bbc56f572f77-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:18 crc kubenswrapper[4739]: E0218 14:19:18.900260 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-northd-0" podUID="b3be45be-9ee4-4114-b2e5-78d9b0341129" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.934903 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-58cc898c97-gzzx9_4cd95c4f-592d-4c7e-bdeb-ec99b168126b/console/0.log" Feb 18 14:19:18 crc kubenswrapper[4739]: I0218 14:19:18.934983 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.015522 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-trusted-ca-bundle\") pod \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.016630 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7h7c\" (UniqueName: \"kubernetes.io/projected/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-kube-api-access-f7h7c\") pod \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.016942 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "4cd95c4f-592d-4c7e-bdeb-ec99b168126b" (UID: "4cd95c4f-592d-4c7e-bdeb-ec99b168126b"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.018300 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-service-ca\") pod \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.018954 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-service-ca" (OuterVolumeSpecName: "service-ca") pod "4cd95c4f-592d-4c7e-bdeb-ec99b168126b" (UID: "4cd95c4f-592d-4c7e-bdeb-ec99b168126b"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.019123 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-oauth-config\") pod \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.019297 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-config\") pod \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.020073 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "4cd95c4f-592d-4c7e-bdeb-ec99b168126b" (UID: "4cd95c4f-592d-4c7e-bdeb-ec99b168126b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.019438 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-oauth-serving-cert\") pod \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.020769 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-serving-cert\") pod \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\" (UID: \"4cd95c4f-592d-4c7e-bdeb-ec99b168126b\") " Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.020083 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-config" (OuterVolumeSpecName: "console-config") pod "4cd95c4f-592d-4c7e-bdeb-ec99b168126b" (UID: "4cd95c4f-592d-4c7e-bdeb-ec99b168126b"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.022246 4739 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.022277 4739 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.022287 4739 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.022297 4739 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.026750 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-kube-api-access-f7h7c" (OuterVolumeSpecName: "kube-api-access-f7h7c") pod "4cd95c4f-592d-4c7e-bdeb-ec99b168126b" (UID: "4cd95c4f-592d-4c7e-bdeb-ec99b168126b"). InnerVolumeSpecName "kube-api-access-f7h7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.026867 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "4cd95c4f-592d-4c7e-bdeb-ec99b168126b" (UID: "4cd95c4f-592d-4c7e-bdeb-ec99b168126b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.028675 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "4cd95c4f-592d-4c7e-bdeb-ec99b168126b" (UID: "4cd95c4f-592d-4c7e-bdeb-ec99b168126b"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.119406 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-cfjpx"] Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.124002 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7h7c\" (UniqueName: \"kubernetes.io/projected/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-kube-api-access-f7h7c\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.124089 4739 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.124155 4739 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4cd95c4f-592d-4c7e-bdeb-ec99b168126b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.366301 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cfjpx" event={"ID":"ab89b7a2-642d-4a99-9eb4-f01b2990e75d","Type":"ContainerStarted","Data":"542842abdf2ee0753ae804a9cea526e4b6d5b0555fbd53a632bf6c534bb3371f"} Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.368666 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"869aa11b-eba7-4598-90dc-d50c642b9120","Type":"ContainerStarted","Data":"9c6d0d55a895a14de60b05d9c4c4d871217aebf1c393380fdf7c5b746a8e5a74"} Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.373374 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" event={"ID":"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0","Type":"ContainerStarted","Data":"bd2acd3a75008df77a9a70e8c10e031a2f47232a877e8beae462dd4837d94738"} Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.374360 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.375599 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-58cc898c97-gzzx9_4cd95c4f-592d-4c7e-bdeb-ec99b168126b/console/0.log" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.375720 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58cc898c97-gzzx9" event={"ID":"4cd95c4f-592d-4c7e-bdeb-ec99b168126b","Type":"ContainerDied","Data":"df9030b739dbc83cef12914ae8d05fcfaf3c9ae9c31af8304d4b753fc912b097"} Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.375793 4739 scope.go:117] "RemoveContainer" containerID="0944c4f82b66901b45134e70e812dca310249100c057d0ce2374a1d9db397c6f" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.375922 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-58cc898c97-gzzx9" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.380339 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.383219 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b3be45be-9ee4-4114-b2e5-78d9b0341129","Type":"ContainerStarted","Data":"6c7fb6f1999ca15fc619b1f0a7989fe1807e432b96c137cfc426b535e81aa656"} Feb 18 14:19:19 crc kubenswrapper[4739]: E0218 14:19:19.385097 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified\\\"\"" pod="openstack/ovn-northd-0" podUID="b3be45be-9ee4-4114-b2e5-78d9b0341129" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.393361 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371971.461437 podStartE2EDuration="1m5.393337827s" podCreationTimestamp="2026-02-18 14:18:14 +0000 UTC" firstStartedPulling="2026-02-18 14:18:16.865636745 +0000 UTC m=+1129.361357667" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:19.38990865 +0000 UTC m=+1191.885629582" watchObservedRunningTime="2026-02-18 14:19:19.393337827 +0000 UTC m=+1191.889058749" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.436028 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" podStartSLOduration=11.436007274 podStartE2EDuration="11.436007274s" podCreationTimestamp="2026-02-18 14:19:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:19.433702615 +0000 UTC m=+1191.929423547" watchObservedRunningTime="2026-02-18 14:19:19.436007274 +0000 UTC m=+1191.931728196" Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.464319 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-mgk2p"] Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.472582 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-mgk2p"] Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.481868 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-58cc898c97-gzzx9"] Feb 18 14:19:19 crc kubenswrapper[4739]: I0218 14:19:19.491113 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-58cc898c97-gzzx9"] Feb 18 14:19:20 crc kubenswrapper[4739]: E0218 14:19:20.412338 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified\\\"\"" pod="openstack/ovn-northd-0" podUID="b3be45be-9ee4-4114-b2e5-78d9b0341129" Feb 18 14:19:20 crc kubenswrapper[4739]: I0218 14:19:20.479057 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3866887c-44e3-4436-bd88-bbc56f572f77" path="/var/lib/kubelet/pods/3866887c-44e3-4436-bd88-bbc56f572f77/volumes" Feb 18 14:19:20 crc kubenswrapper[4739]: I0218 14:19:20.480337 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cd95c4f-592d-4c7e-bdeb-ec99b168126b" path="/var/lib/kubelet/pods/4cd95c4f-592d-4c7e-bdeb-ec99b168126b/volumes" Feb 18 14:19:21 crc kubenswrapper[4739]: I0218 14:19:21.748981 4739 provider.go:102] 
Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 14:19:22 crc kubenswrapper[4739]: I0218 14:19:22.429036 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdf07d43-6839-4ae1-9efd-bd21557e31f0","Type":"ContainerStarted","Data":"20e4696ddb81097644db58c7ff47cdd8db35bca8af8eb47dfd10333be0e9ab30"} Feb 18 14:19:23 crc kubenswrapper[4739]: I0218 14:19:23.355864 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5bf47b49b7-mgk2p" podUID="3866887c-44e3-4436-bd88-bbc56f572f77" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.145:5353: i/o timeout" Feb 18 14:19:23 crc kubenswrapper[4739]: I0218 14:19:23.451982 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cfjpx" event={"ID":"ab89b7a2-642d-4a99-9eb4-f01b2990e75d","Type":"ContainerStarted","Data":"74f496583eea24c7aa24787e4734e6c62cca95951d885c0cd6942e3b4f8ff69f"} Feb 18 14:19:23 crc kubenswrapper[4739]: I0218 14:19:23.468466 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-cfjpx" podStartSLOduration=7.3844129800000005 podStartE2EDuration="10.468428222s" podCreationTimestamp="2026-02-18 14:19:13 +0000 UTC" firstStartedPulling="2026-02-18 14:19:19.125033683 +0000 UTC m=+1191.620754605" lastFinishedPulling="2026-02-18 14:19:22.209048925 +0000 UTC m=+1194.704769847" observedRunningTime="2026-02-18 14:19:23.467606041 +0000 UTC m=+1195.963326973" watchObservedRunningTime="2026-02-18 14:19:23.468428222 +0000 UTC m=+1195.964149144" Feb 18 14:19:24 crc kubenswrapper[4739]: I0218 14:19:24.172467 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:19:24 crc kubenswrapper[4739]: I0218 14:19:24.265935 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-gf2dl"] Feb 18 14:19:24 crc kubenswrapper[4739]: I0218 14:19:24.266591 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-gf2dl" podUID="80f2df75-0584-449d-bd30-80aa45c8f5ff" containerName="dnsmasq-dns" containerID="cri-o://bd4ca7eba39454221d510f944a98375576604027d6f8bc4b8cf191891479a9fb" gracePeriod=10 Feb 18 14:19:24 crc kubenswrapper[4739]: I0218 14:19:24.595577 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 18 14:19:24 crc kubenswrapper[4739]: I0218 14:19:24.595946 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 18 14:19:24 crc kubenswrapper[4739]: I0218 14:19:24.862574 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 18 14:19:25 crc kubenswrapper[4739]: I0218 14:19:25.489340 4739 generic.go:334] "Generic (PLEG): container finished" podID="80f2df75-0584-449d-bd30-80aa45c8f5ff" containerID="bd4ca7eba39454221d510f944a98375576604027d6f8bc4b8cf191891479a9fb" exitCode=0 Feb 18 14:19:25 crc kubenswrapper[4739]: I0218 14:19:25.489411 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-gf2dl" event={"ID":"80f2df75-0584-449d-bd30-80aa45c8f5ff","Type":"ContainerDied","Data":"bd4ca7eba39454221d510f944a98375576604027d6f8bc4b8cf191891479a9fb"} Feb 18 14:19:25 crc kubenswrapper[4739]: I0218 14:19:25.570023 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/openstack-galera-0" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.009593 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.009902 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.080818 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:26 crc kubenswrapper[4739]: E0218 14:19:26.081012 4739 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 14:19:26 crc kubenswrapper[4739]: E0218 14:19:26.081037 4739 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 14:19:26 crc kubenswrapper[4739]: E0218 14:19:26.081100 4739 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift podName:4da69d20-d4af-4d8d-b1e1-5026676d2078 nodeName:}" failed. No retries permitted until 2026-02-18 14:19:42.081078389 +0000 UTC m=+1214.576799301 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift") pod "swift-storage-0" (UID: "4da69d20-d4af-4d8d-b1e1-5026676d2078") : configmap "swift-ring-files" not found Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.090406 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.595885 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.886851 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-d1e3-account-create-update-27rvz"] Feb 18 14:19:26 crc kubenswrapper[4739]: E0218 14:19:26.887348 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3866887c-44e3-4436-bd88-bbc56f572f77" containerName="init" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.887365 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3866887c-44e3-4436-bd88-bbc56f572f77" containerName="init" Feb 18 14:19:26 crc kubenswrapper[4739]: E0218 14:19:26.887387 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3866887c-44e3-4436-bd88-bbc56f572f77" containerName="dnsmasq-dns" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.887394 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3866887c-44e3-4436-bd88-bbc56f572f77" containerName="dnsmasq-dns" Feb 18 14:19:26 crc kubenswrapper[4739]: E0218 14:19:26.887426 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd95c4f-592d-4c7e-bdeb-ec99b168126b" containerName="console" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.887433 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd95c4f-592d-4c7e-bdeb-ec99b168126b" containerName="console" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.887668 4739 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="4cd95c4f-592d-4c7e-bdeb-ec99b168126b" containerName="console" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.887688 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3866887c-44e3-4436-bd88-bbc56f572f77" containerName="dnsmasq-dns" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.888504 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d1e3-account-create-update-27rvz" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.894317 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.901546 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-nndld"] Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.902963 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nndld" Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.944181 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-nndld"] Feb 18 14:19:26 crc kubenswrapper[4739]: I0218 14:19:26.955759 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d1e3-account-create-update-27rvz"] Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.013149 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08bf9ca-ebbc-4d72-b227-20a5c7eed529-operator-scripts\") pod \"glance-db-create-nndld\" (UID: \"b08bf9ca-ebbc-4d72-b227-20a5c7eed529\") " pod="openstack/glance-db-create-nndld" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.013225 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66-operator-scripts\") pod \"glance-d1e3-account-create-update-27rvz\" (UID: \"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66\") " pod="openstack/glance-d1e3-account-create-update-27rvz" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.013382 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htdjf\" (UniqueName: \"kubernetes.io/projected/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66-kube-api-access-htdjf\") pod \"glance-d1e3-account-create-update-27rvz\" (UID: \"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66\") " pod="openstack/glance-d1e3-account-create-update-27rvz" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.013557 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lpld\" (UniqueName: \"kubernetes.io/projected/b08bf9ca-ebbc-4d72-b227-20a5c7eed529-kube-api-access-9lpld\") pod \"glance-db-create-nndld\" (UID: \"b08bf9ca-ebbc-4d72-b227-20a5c7eed529\") " pod="openstack/glance-db-create-nndld" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.060499 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-fwtxs"] Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.062165 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-fwtxs" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.073171 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fwtxs"] Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.116046 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08bf9ca-ebbc-4d72-b227-20a5c7eed529-operator-scripts\") pod \"glance-db-create-nndld\" (UID: \"b08bf9ca-ebbc-4d72-b227-20a5c7eed529\") " pod="openstack/glance-db-create-nndld" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.116124 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66-operator-scripts\") pod \"glance-d1e3-account-create-update-27rvz\" (UID: \"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66\") " pod="openstack/glance-d1e3-account-create-update-27rvz" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.116243 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htdjf\" (UniqueName: \"kubernetes.io/projected/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66-kube-api-access-htdjf\") pod \"glance-d1e3-account-create-update-27rvz\" (UID: \"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66\") " pod="openstack/glance-d1e3-account-create-update-27rvz" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.116358 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lpld\" (UniqueName: \"kubernetes.io/projected/b08bf9ca-ebbc-4d72-b227-20a5c7eed529-kube-api-access-9lpld\") pod \"glance-db-create-nndld\" (UID: \"b08bf9ca-ebbc-4d72-b227-20a5c7eed529\") " pod="openstack/glance-db-create-nndld" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.117101 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66-operator-scripts\") pod \"glance-d1e3-account-create-update-27rvz\" (UID: \"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66\") " pod="openstack/glance-d1e3-account-create-update-27rvz" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.117112 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08bf9ca-ebbc-4d72-b227-20a5c7eed529-operator-scripts\") pod \"glance-db-create-nndld\" (UID: \"b08bf9ca-ebbc-4d72-b227-20a5c7eed529\") " pod="openstack/glance-db-create-nndld" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.151841 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htdjf\" (UniqueName: \"kubernetes.io/projected/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66-kube-api-access-htdjf\") pod \"glance-d1e3-account-create-update-27rvz\" (UID: \"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66\") " pod="openstack/glance-d1e3-account-create-update-27rvz" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.162072 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lpld\" (UniqueName: \"kubernetes.io/projected/b08bf9ca-ebbc-4d72-b227-20a5c7eed529-kube-api-access-9lpld\") pod \"glance-db-create-nndld\" (UID: \"b08bf9ca-ebbc-4d72-b227-20a5c7eed529\") " pod="openstack/glance-db-create-nndld" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.217778 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nplp\" (UniqueName: \"kubernetes.io/projected/075a587a-4bf2-43e9-8c63-1357e9cb05c9-kube-api-access-7nplp\") pod \"keystone-db-create-fwtxs\" (UID: \"075a587a-4bf2-43e9-8c63-1357e9cb05c9\") " pod="openstack/keystone-db-create-fwtxs" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.217904 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/075a587a-4bf2-43e9-8c63-1357e9cb05c9-operator-scripts\") pod \"keystone-db-create-fwtxs\" (UID: \"075a587a-4bf2-43e9-8c63-1357e9cb05c9\") " pod="openstack/keystone-db-create-fwtxs" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.221153 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d1e3-account-create-update-27rvz" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.240887 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nndld" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.246662 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-973a-account-create-update-lsz5w"] Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.247999 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-973a-account-create-update-lsz5w" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.258753 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.288380 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-973a-account-create-update-lsz5w"] Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.334955 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nplp\" (UniqueName: \"kubernetes.io/projected/075a587a-4bf2-43e9-8c63-1357e9cb05c9-kube-api-access-7nplp\") pod \"keystone-db-create-fwtxs\" (UID: \"075a587a-4bf2-43e9-8c63-1357e9cb05c9\") " pod="openstack/keystone-db-create-fwtxs" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.335047 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1637477-36b3-4dea-b260-15b6e2532af8-operator-scripts\") pod \"keystone-973a-account-create-update-lsz5w\" (UID: \"e1637477-36b3-4dea-b260-15b6e2532af8\") " pod="openstack/keystone-973a-account-create-update-lsz5w" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.335157 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/075a587a-4bf2-43e9-8c63-1357e9cb05c9-operator-scripts\") pod \"keystone-db-create-fwtxs\" (UID: \"075a587a-4bf2-43e9-8c63-1357e9cb05c9\") " pod="openstack/keystone-db-create-fwtxs" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.335227 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9mcc\" (UniqueName: \"kubernetes.io/projected/e1637477-36b3-4dea-b260-15b6e2532af8-kube-api-access-m9mcc\") pod \"keystone-973a-account-create-update-lsz5w\" (UID: \"e1637477-36b3-4dea-b260-15b6e2532af8\") " pod="openstack/keystone-973a-account-create-update-lsz5w" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.336313 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/075a587a-4bf2-43e9-8c63-1357e9cb05c9-operator-scripts\") pod \"keystone-db-create-fwtxs\" (UID: \"075a587a-4bf2-43e9-8c63-1357e9cb05c9\") " pod="openstack/keystone-db-create-fwtxs" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.357956 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nplp\" (UniqueName: \"kubernetes.io/projected/075a587a-4bf2-43e9-8c63-1357e9cb05c9-kube-api-access-7nplp\") pod \"keystone-db-create-fwtxs\" (UID: \"075a587a-4bf2-43e9-8c63-1357e9cb05c9\") " pod="openstack/keystone-db-create-fwtxs" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.359013 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-x8lmx"] Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.360597 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x8lmx" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.384093 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-x8lmx"] Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.393888 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-gf2dl" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.404037 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fwtxs" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.436999 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9mcc\" (UniqueName: \"kubernetes.io/projected/e1637477-36b3-4dea-b260-15b6e2532af8-kube-api-access-m9mcc\") pod \"keystone-973a-account-create-update-lsz5w\" (UID: \"e1637477-36b3-4dea-b260-15b6e2532af8\") " pod="openstack/keystone-973a-account-create-update-lsz5w" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.437194 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1637477-36b3-4dea-b260-15b6e2532af8-operator-scripts\") pod \"keystone-973a-account-create-update-lsz5w\" (UID: \"e1637477-36b3-4dea-b260-15b6e2532af8\") " pod="openstack/keystone-973a-account-create-update-lsz5w" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.438052 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1637477-36b3-4dea-b260-15b6e2532af8-operator-scripts\") pod \"keystone-973a-account-create-update-lsz5w\" (UID: \"e1637477-36b3-4dea-b260-15b6e2532af8\") " pod="openstack/keystone-973a-account-create-update-lsz5w" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.473909 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9mcc\" (UniqueName: \"kubernetes.io/projected/e1637477-36b3-4dea-b260-15b6e2532af8-kube-api-access-m9mcc\") pod \"keystone-973a-account-create-update-lsz5w\" (UID: \"e1637477-36b3-4dea-b260-15b6e2532af8\") " pod="openstack/keystone-973a-account-create-update-lsz5w" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.474611 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-4dc5-account-create-update-shnqq"] Feb 18 14:19:27 crc kubenswrapper[4739]: E0218 14:19:27.475088 4739 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="80f2df75-0584-449d-bd30-80aa45c8f5ff" containerName="dnsmasq-dns" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.475111 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="80f2df75-0584-449d-bd30-80aa45c8f5ff" containerName="dnsmasq-dns" Feb 18 14:19:27 crc kubenswrapper[4739]: E0218 14:19:27.475148 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80f2df75-0584-449d-bd30-80aa45c8f5ff" containerName="init" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.475155 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="80f2df75-0584-449d-bd30-80aa45c8f5ff" containerName="init" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.475337 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="80f2df75-0584-449d-bd30-80aa45c8f5ff" containerName="dnsmasq-dns" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.476144 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4dc5-account-create-update-shnqq" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.480759 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.484433 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4dc5-account-create-update-shnqq"] Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.540798 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-config\") pod \"80f2df75-0584-449d-bd30-80aa45c8f5ff\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.540860 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-dns-svc\") pod \"80f2df75-0584-449d-bd30-80aa45c8f5ff\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.540918 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-ovsdbserver-sb\") pod \"80f2df75-0584-449d-bd30-80aa45c8f5ff\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.540965 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-ovsdbserver-nb\") pod \"80f2df75-0584-449d-bd30-80aa45c8f5ff\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.541040 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rjsz\" (UniqueName: \"kubernetes.io/projected/80f2df75-0584-449d-bd30-80aa45c8f5ff-kube-api-access-6rjsz\") pod \"80f2df75-0584-449d-bd30-80aa45c8f5ff\" (UID: \"80f2df75-0584-449d-bd30-80aa45c8f5ff\") " Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.541460 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdpkq\" (UniqueName: \"kubernetes.io/projected/f8c94ce9-7b1b-43bd-9c93-303d0e675809-kube-api-access-qdpkq\") pod \"placement-db-create-x8lmx\" (UID: \"f8c94ce9-7b1b-43bd-9c93-303d0e675809\") " pod="openstack/placement-db-create-x8lmx" Feb 18 
14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.541623 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8c94ce9-7b1b-43bd-9c93-303d0e675809-operator-scripts\") pod \"placement-db-create-x8lmx\" (UID: \"f8c94ce9-7b1b-43bd-9c93-303d0e675809\") " pod="openstack/placement-db-create-x8lmx" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.549331 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80f2df75-0584-449d-bd30-80aa45c8f5ff-kube-api-access-6rjsz" (OuterVolumeSpecName: "kube-api-access-6rjsz") pod "80f2df75-0584-449d-bd30-80aa45c8f5ff" (UID: "80f2df75-0584-449d-bd30-80aa45c8f5ff"). InnerVolumeSpecName "kube-api-access-6rjsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.552999 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-gf2dl" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.553768 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-gf2dl" event={"ID":"80f2df75-0584-449d-bd30-80aa45c8f5ff","Type":"ContainerDied","Data":"6c0344dcd1980d3e621d946739f4b13130dbeab96724b311a0270793512ebb0c"} Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.553806 4739 scope.go:117] "RemoveContainer" containerID="bd4ca7eba39454221d510f944a98375576604027d6f8bc4b8cf191891479a9fb" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.611034 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-973a-account-create-update-lsz5w" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.612754 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-config" (OuterVolumeSpecName: "config") pod "80f2df75-0584-449d-bd30-80aa45c8f5ff" (UID: "80f2df75-0584-449d-bd30-80aa45c8f5ff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.617763 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "80f2df75-0584-449d-bd30-80aa45c8f5ff" (UID: "80f2df75-0584-449d-bd30-80aa45c8f5ff"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.622731 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "80f2df75-0584-449d-bd30-80aa45c8f5ff" (UID: "80f2df75-0584-449d-bd30-80aa45c8f5ff"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.638237 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "80f2df75-0584-449d-bd30-80aa45c8f5ff" (UID: "80f2df75-0584-449d-bd30-80aa45c8f5ff"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.640873 4739 scope.go:117] "RemoveContainer" containerID="f2cdf7655b497075da25ea2d8a12a5618350bcc5c996868ab38470ae9cd7ab7d" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.643092 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8c94ce9-7b1b-43bd-9c93-303d0e675809-operator-scripts\") pod \"placement-db-create-x8lmx\" (UID: \"f8c94ce9-7b1b-43bd-9c93-303d0e675809\") " pod="openstack/placement-db-create-x8lmx" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.643205 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdpkq\" (UniqueName: \"kubernetes.io/projected/f8c94ce9-7b1b-43bd-9c93-303d0e675809-kube-api-access-qdpkq\") pod \"placement-db-create-x8lmx\" (UID: \"f8c94ce9-7b1b-43bd-9c93-303d0e675809\") " pod="openstack/placement-db-create-x8lmx" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.643251 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsqzg\" (UniqueName: \"kubernetes.io/projected/8e4c634d-6e65-4f6b-8001-0ac3e35a4801-kube-api-access-tsqzg\") pod \"placement-4dc5-account-create-update-shnqq\" (UID: \"8e4c634d-6e65-4f6b-8001-0ac3e35a4801\") " pod="openstack/placement-4dc5-account-create-update-shnqq" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.643277 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e4c634d-6e65-4f6b-8001-0ac3e35a4801-operator-scripts\") pod \"placement-4dc5-account-create-update-shnqq\" (UID: \"8e4c634d-6e65-4f6b-8001-0ac3e35a4801\") " pod="openstack/placement-4dc5-account-create-update-shnqq" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.643411 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.643562 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.643576 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.643586 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80f2df75-0584-449d-bd30-80aa45c8f5ff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.643599 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rjsz\" (UniqueName: \"kubernetes.io/projected/80f2df75-0584-449d-bd30-80aa45c8f5ff-kube-api-access-6rjsz\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.646019 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8c94ce9-7b1b-43bd-9c93-303d0e675809-operator-scripts\") pod \"placement-db-create-x8lmx\" (UID: 
\"f8c94ce9-7b1b-43bd-9c93-303d0e675809\") " pod="openstack/placement-db-create-x8lmx" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.662209 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdpkq\" (UniqueName: \"kubernetes.io/projected/f8c94ce9-7b1b-43bd-9c93-303d0e675809-kube-api-access-qdpkq\") pod \"placement-db-create-x8lmx\" (UID: \"f8c94ce9-7b1b-43bd-9c93-303d0e675809\") " pod="openstack/placement-db-create-x8lmx" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.680249 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x8lmx" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.745184 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsqzg\" (UniqueName: \"kubernetes.io/projected/8e4c634d-6e65-4f6b-8001-0ac3e35a4801-kube-api-access-tsqzg\") pod \"placement-4dc5-account-create-update-shnqq\" (UID: \"8e4c634d-6e65-4f6b-8001-0ac3e35a4801\") " pod="openstack/placement-4dc5-account-create-update-shnqq" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.745611 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e4c634d-6e65-4f6b-8001-0ac3e35a4801-operator-scripts\") pod \"placement-4dc5-account-create-update-shnqq\" (UID: \"8e4c634d-6e65-4f6b-8001-0ac3e35a4801\") " pod="openstack/placement-4dc5-account-create-update-shnqq" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.746994 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e4c634d-6e65-4f6b-8001-0ac3e35a4801-operator-scripts\") pod \"placement-4dc5-account-create-update-shnqq\" (UID: \"8e4c634d-6e65-4f6b-8001-0ac3e35a4801\") " pod="openstack/placement-4dc5-account-create-update-shnqq" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.763674 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsqzg\" (UniqueName: \"kubernetes.io/projected/8e4c634d-6e65-4f6b-8001-0ac3e35a4801-kube-api-access-tsqzg\") pod \"placement-4dc5-account-create-update-shnqq\" (UID: \"8e4c634d-6e65-4f6b-8001-0ac3e35a4801\") " pod="openstack/placement-4dc5-account-create-update-shnqq" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.808847 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4dc5-account-create-update-shnqq" Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.900636 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-gf2dl"] Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.915561 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-gf2dl"] Feb 18 14:19:27 crc kubenswrapper[4739]: I0218 14:19:27.974071 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-nndld"] Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.054933 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d1e3-account-create-update-27rvz"] Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.067562 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fwtxs"] Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.389068 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.451724 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80f2df75-0584-449d-bd30-80aa45c8f5ff" path="/var/lib/kubelet/pods/80f2df75-0584-449d-bd30-80aa45c8f5ff/volumes" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.613838 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-m9bmk"] Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.616247 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.620152 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fwtxs" event={"ID":"075a587a-4bf2-43e9-8c63-1357e9cb05c9","Type":"ContainerStarted","Data":"9f0626a8e486de18d204ce8ce30bfe092ee4b300499982be629e59e5f5aca34d"} Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.626727 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nndld" event={"ID":"b08bf9ca-ebbc-4d72-b227-20a5c7eed529","Type":"ContainerStarted","Data":"613a7d90de4a82a3a9fc510a8a51302f9fadb58e779fcf276967614f1d7b949a"} Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.629431 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-m9bmk"] Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.632686 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d1e3-account-create-update-27rvz" event={"ID":"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66","Type":"ContainerStarted","Data":"00d215ec78bf8c770cacf540ff66f3d4763867f9682f81bcb5a03fb3842969ec"} Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.727827 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-84ff-account-create-update-9xb4v"] Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.729354 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.737646 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.739265 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-84ff-account-create-update-9xb4v"] Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.795483 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0275833c-ab0c-4865-9c6e-5c8d54a5e238-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-m9bmk\" (UID: \"0275833c-ab0c-4865-9c6e-5c8d54a5e238\") " pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.795801 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c50e4a24-ad83-4694-be4d-6b0811726c3d-operator-scripts\") pod \"mysqld-exporter-84ff-account-create-update-9xb4v\" (UID: \"c50e4a24-ad83-4694-be4d-6b0811726c3d\") " pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.795885 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhjgq\" (UniqueName: \"kubernetes.io/projected/0275833c-ab0c-4865-9c6e-5c8d54a5e238-kube-api-access-bhjgq\") pod \"mysqld-exporter-openstack-db-create-m9bmk\" (UID: \"0275833c-ab0c-4865-9c6e-5c8d54a5e238\") " pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.796002 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppqdz\" (UniqueName: \"kubernetes.io/projected/c50e4a24-ad83-4694-be4d-6b0811726c3d-kube-api-access-ppqdz\") pod \"mysqld-exporter-84ff-account-create-update-9xb4v\" (UID: \"c50e4a24-ad83-4694-be4d-6b0811726c3d\") " pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.897559 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhjgq\" (UniqueName: \"kubernetes.io/projected/0275833c-ab0c-4865-9c6e-5c8d54a5e238-kube-api-access-bhjgq\") pod \"mysqld-exporter-openstack-db-create-m9bmk\" (UID: \"0275833c-ab0c-4865-9c6e-5c8d54a5e238\") " pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.897724 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppqdz\" (UniqueName: \"kubernetes.io/projected/c50e4a24-ad83-4694-be4d-6b0811726c3d-kube-api-access-ppqdz\") pod \"mysqld-exporter-84ff-account-create-update-9xb4v\" (UID: \"c50e4a24-ad83-4694-be4d-6b0811726c3d\") " pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.897819 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0275833c-ab0c-4865-9c6e-5c8d54a5e238-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-m9bmk\" (UID: \"0275833c-ab0c-4865-9c6e-5c8d54a5e238\") " pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" Feb 18 
14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.897848 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c50e4a24-ad83-4694-be4d-6b0811726c3d-operator-scripts\") pod \"mysqld-exporter-84ff-account-create-update-9xb4v\" (UID: \"c50e4a24-ad83-4694-be4d-6b0811726c3d\") " pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" Feb 18 14:19:28 crc kubenswrapper[4739]: E0218 14:19:28.898916 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.900138 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0275833c-ab0c-4865-9c6e-5c8d54a5e238-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-m9bmk\" (UID: \"0275833c-ab0c-4865-9c6e-5c8d54a5e238\") " pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.900258 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c50e4a24-ad83-4694-be4d-6b0811726c3d-operator-scripts\") pod \"mysqld-exporter-84ff-account-create-update-9xb4v\" (UID: \"c50e4a24-ad83-4694-be4d-6b0811726c3d\") " pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.918022 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppqdz\" (UniqueName: \"kubernetes.io/projected/c50e4a24-ad83-4694-be4d-6b0811726c3d-kube-api-access-ppqdz\") pod \"mysqld-exporter-84ff-account-create-update-9xb4v\" (UID: \"c50e4a24-ad83-4694-be4d-6b0811726c3d\") " pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" Feb 18 14:19:28 crc kubenswrapper[4739]: I0218 14:19:28.919540 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhjgq\" (UniqueName: \"kubernetes.io/projected/0275833c-ab0c-4865-9c6e-5c8d54a5e238-kube-api-access-bhjgq\") pod \"mysqld-exporter-openstack-db-create-m9bmk\" (UID: \"0275833c-ab0c-4865-9c6e-5c8d54a5e238\") " pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.078736 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-973a-account-create-update-lsz5w"] Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.094289 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.110009 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.130267 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.156385 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4dc5-account-create-update-shnqq"] Feb 18 14:19:29 crc kubenswrapper[4739]: W0218 14:19:29.172953 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e4c634d_6e65_4f6b_8001_0ac3e35a4801.slice/crio-54492e6d106546731c753047a5db4d88768e53ebe0159a58f4f35c4a92c5b155 WatchSource:0}: Error finding container 54492e6d106546731c753047a5db4d88768e53ebe0159a58f4f35c4a92c5b155: Status 404 returned error can't find the container with id 54492e6d106546731c753047a5db4d88768e53ebe0159a58f4f35c4a92c5b155 Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.194043 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.288480 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-x8lmx"] Feb 18 14:19:29 crc kubenswrapper[4739]: W0218 14:19:29.399219 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8c94ce9_7b1b_43bd_9c93_303d0e675809.slice/crio-26f6e1134e16bdbdb98e6a4ce05e0bd26a0a24d306555e5abd05bd34c7e3b00d WatchSource:0}: Error finding container 26f6e1134e16bdbdb98e6a4ce05e0bd26a0a24d306555e5abd05bd34c7e3b00d: Status 404 returned error can't find the container with id 26f6e1134e16bdbdb98e6a4ce05e0bd26a0a24d306555e5abd05bd34c7e3b00d Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.658185 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-m9bmk"] Feb 18 14:19:29 crc kubenswrapper[4739]: W0218 14:19:29.665134 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0275833c_ab0c_4865_9c6e_5c8d54a5e238.slice/crio-c577b37bc548486a245d849da0df3c462ef996dd123f2fe9d21e5c0d211b304a WatchSource:0}: Error finding container c577b37bc548486a245d849da0df3c462ef996dd123f2fe9d21e5c0d211b304a: Status 404 returned error can't find the container with id c577b37bc548486a245d849da0df3c462ef996dd123f2fe9d21e5c0d211b304a Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.666885 4739 generic.go:334] "Generic (PLEG): container finished" podID="075a587a-4bf2-43e9-8c63-1357e9cb05c9" containerID="cbc19c6c86655aa18f2e8592ecad70f9e15a7d8e6a21338195448e4c95da6205" exitCode=0 Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.666987 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fwtxs" event={"ID":"075a587a-4bf2-43e9-8c63-1357e9cb05c9","Type":"ContainerDied","Data":"cbc19c6c86655aa18f2e8592ecad70f9e15a7d8e6a21338195448e4c95da6205"} Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.669888 4739 generic.go:334] "Generic (PLEG): container finished" podID="c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66" containerID="0ff92f634c028d5fd31e4fe14bc0e896efd80534f8071fbf418f38d2b982dd3d" exitCode=0 Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.669945 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d1e3-account-create-update-27rvz" event={"ID":"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66","Type":"ContainerDied","Data":"0ff92f634c028d5fd31e4fe14bc0e896efd80534f8071fbf418f38d2b982dd3d"} Feb 18 14:19:29 
crc kubenswrapper[4739]: I0218 14:19:29.677646 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-973a-account-create-update-lsz5w" event={"ID":"e1637477-36b3-4dea-b260-15b6e2532af8","Type":"ContainerStarted","Data":"b71e725f96b6406936744325d7c950ca7ac36b206c41fc8ca5c6914fe0564b72"} Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.677702 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-973a-account-create-update-lsz5w" event={"ID":"e1637477-36b3-4dea-b260-15b6e2532af8","Type":"ContainerStarted","Data":"d6646a29cf0de84fa8bed99394a55b7c9c035ddad6cd104b66ee80a2d71f20e1"} Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.690975 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4dc5-account-create-update-shnqq" event={"ID":"8e4c634d-6e65-4f6b-8001-0ac3e35a4801","Type":"ContainerStarted","Data":"0d27470aa9ffe633d4b6a23a81a92ae2b802439fbedd1d4e1b5cb7aad209d3a5"} Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.691019 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4dc5-account-create-update-shnqq" event={"ID":"8e4c634d-6e65-4f6b-8001-0ac3e35a4801","Type":"ContainerStarted","Data":"54492e6d106546731c753047a5db4d88768e53ebe0159a58f4f35c4a92c5b155"} Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.702880 4739 generic.go:334] "Generic (PLEG): container finished" podID="b08bf9ca-ebbc-4d72-b227-20a5c7eed529" containerID="a772895e8b9301fae88d05626c6575b52b2a6a8650d7cff35a137c777919497f" exitCode=0 Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.703003 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nndld" event={"ID":"b08bf9ca-ebbc-4d72-b227-20a5c7eed529","Type":"ContainerDied","Data":"a772895e8b9301fae88d05626c6575b52b2a6a8650d7cff35a137c777919497f"} Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.718303 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x8lmx" event={"ID":"f8c94ce9-7b1b-43bd-9c93-303d0e675809","Type":"ContainerStarted","Data":"b43639724ef806f70a0570b3c7861b506614a00a4a43b0f7196363d0163afa24"} Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.718347 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x8lmx" event={"ID":"f8c94ce9-7b1b-43bd-9c93-303d0e675809","Type":"ContainerStarted","Data":"26f6e1134e16bdbdb98e6a4ce05e0bd26a0a24d306555e5abd05bd34c7e3b00d"} Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.725151 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdf07d43-6839-4ae1-9efd-bd21557e31f0","Type":"ContainerStarted","Data":"33e26c074fe392c233d18320191c667cb0f7939b2787e917560ff0fa66b0f407"} Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.726693 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-973a-account-create-update-lsz5w" podStartSLOduration=2.726675104 podStartE2EDuration="2.726675104s" podCreationTimestamp="2026-02-18 14:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:29.716553917 +0000 UTC m=+1202.212274839" watchObservedRunningTime="2026-02-18 14:19:29.726675104 +0000 UTC m=+1202.222396026" Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.754631 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/placement-4dc5-account-create-update-shnqq" podStartSLOduration=2.7546062559999998 podStartE2EDuration="2.754606256s" podCreationTimestamp="2026-02-18 14:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:29.728727527 +0000 UTC m=+1202.224448449" watchObservedRunningTime="2026-02-18 14:19:29.754606256 +0000 UTC m=+1202.250327188" Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.774383 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-x8lmx" podStartSLOduration=2.774363579 podStartE2EDuration="2.774363579s" podCreationTimestamp="2026-02-18 14:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:29.75831268 +0000 UTC m=+1202.254033602" watchObservedRunningTime="2026-02-18 14:19:29.774363579 +0000 UTC m=+1202.270084501" Feb 18 14:19:29 crc kubenswrapper[4739]: I0218 14:19:29.812920 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-84ff-account-create-update-9xb4v"] Feb 18 14:19:30 crc kubenswrapper[4739]: E0218 14:19:30.312079 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod846b1cf2_bffb_4eca_a8f2_f3c0fcc7ac4b.slice/crio-aca2d7cf6c996ecda1b70039221c80c30560394fd55fdc793dfd46773ab29a77.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0275833c_ab0c_4865_9c6e_5c8d54a5e238.slice/crio-conmon-06c6fe02fa56ef5594d8d43926f6b44f805a40324d87581600b0c88cf5d2d444.scope\": RecentStats: unable to find data in memory cache]" Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.739932 4739 generic.go:334] "Generic (PLEG): container finished" podID="8e4c634d-6e65-4f6b-8001-0ac3e35a4801" containerID="0d27470aa9ffe633d4b6a23a81a92ae2b802439fbedd1d4e1b5cb7aad209d3a5" exitCode=0 Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.740054 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4dc5-account-create-update-shnqq" event={"ID":"8e4c634d-6e65-4f6b-8001-0ac3e35a4801","Type":"ContainerDied","Data":"0d27470aa9ffe633d4b6a23a81a92ae2b802439fbedd1d4e1b5cb7aad209d3a5"} Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.744868 4739 generic.go:334] "Generic (PLEG): container finished" podID="e1637477-36b3-4dea-b260-15b6e2532af8" containerID="b71e725f96b6406936744325d7c950ca7ac36b206c41fc8ca5c6914fe0564b72" exitCode=0 Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.744957 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-973a-account-create-update-lsz5w" event={"ID":"e1637477-36b3-4dea-b260-15b6e2532af8","Type":"ContainerDied","Data":"b71e725f96b6406936744325d7c950ca7ac36b206c41fc8ca5c6914fe0564b72"} Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.747570 4739 generic.go:334] "Generic (PLEG): container finished" podID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" containerID="aca2d7cf6c996ecda1b70039221c80c30560394fd55fdc793dfd46773ab29a77" exitCode=0 Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.747664 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" 
event={"ID":"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b","Type":"ContainerDied","Data":"aca2d7cf6c996ecda1b70039221c80c30560394fd55fdc793dfd46773ab29a77"} Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.754339 4739 generic.go:334] "Generic (PLEG): container finished" podID="f8c94ce9-7b1b-43bd-9c93-303d0e675809" containerID="b43639724ef806f70a0570b3c7861b506614a00a4a43b0f7196363d0163afa24" exitCode=0 Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.754469 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x8lmx" event={"ID":"f8c94ce9-7b1b-43bd-9c93-303d0e675809","Type":"ContainerDied","Data":"b43639724ef806f70a0570b3c7861b506614a00a4a43b0f7196363d0163afa24"} Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.759181 4739 generic.go:334] "Generic (PLEG): container finished" podID="c50e4a24-ad83-4694-be4d-6b0811726c3d" containerID="a765ba1e358815d14c909f560cbad1d380538cd7c1dacb154a2b8d05f4b98d09" exitCode=0 Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.759322 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" event={"ID":"c50e4a24-ad83-4694-be4d-6b0811726c3d","Type":"ContainerDied","Data":"a765ba1e358815d14c909f560cbad1d380538cd7c1dacb154a2b8d05f4b98d09"} Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.759378 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" event={"ID":"c50e4a24-ad83-4694-be4d-6b0811726c3d","Type":"ContainerStarted","Data":"8d50214c2ea47b4d718d57a39461e35c1ec6d3d03c076b4695023973166e29bf"} Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.764746 4739 generic.go:334] "Generic (PLEG): container finished" podID="0275833c-ab0c-4865-9c6e-5c8d54a5e238" containerID="06c6fe02fa56ef5594d8d43926f6b44f805a40324d87581600b0c88cf5d2d444" exitCode=0 Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.764858 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" event={"ID":"0275833c-ab0c-4865-9c6e-5c8d54a5e238","Type":"ContainerDied","Data":"06c6fe02fa56ef5594d8d43926f6b44f805a40324d87581600b0c88cf5d2d444"} Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.764888 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" event={"ID":"0275833c-ab0c-4865-9c6e-5c8d54a5e238","Type":"ContainerStarted","Data":"c577b37bc548486a245d849da0df3c462ef996dd123f2fe9d21e5c0d211b304a"} Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.767175 4739 generic.go:334] "Generic (PLEG): container finished" podID="f34a572d-30ca-4de5-bf27-3371e1e9d197" containerID="a716eae534567c7eacf310c551635181608ae4e159e2fd3e991903215040cab2" exitCode=0 Feb 18 14:19:30 crc kubenswrapper[4739]: I0218 14:19:30.767321 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f34a572d-30ca-4de5-bf27-3371e1e9d197","Type":"ContainerDied","Data":"a716eae534567c7eacf310c551635181608ae4e159e2fd3e991903215040cab2"} Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.429430 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zz64p" podUID="7289493d-f197-436b-bc45-84721d12c034" containerName="ovn-controller" probeResult="failure" output=< Feb 18 14:19:31 crc kubenswrapper[4739]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 18 14:19:31 crc kubenswrapper[4739]: > Feb 18 
14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.445952 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-nndld" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.485424 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.488019 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5cglq" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.570945 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08bf9ca-ebbc-4d72-b227-20a5c7eed529-operator-scripts\") pod \"b08bf9ca-ebbc-4d72-b227-20a5c7eed529\" (UID: \"b08bf9ca-ebbc-4d72-b227-20a5c7eed529\") " Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.571049 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lpld\" (UniqueName: \"kubernetes.io/projected/b08bf9ca-ebbc-4d72-b227-20a5c7eed529-kube-api-access-9lpld\") pod \"b08bf9ca-ebbc-4d72-b227-20a5c7eed529\" (UID: \"b08bf9ca-ebbc-4d72-b227-20a5c7eed529\") " Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.573657 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b08bf9ca-ebbc-4d72-b227-20a5c7eed529-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b08bf9ca-ebbc-4d72-b227-20a5c7eed529" (UID: "b08bf9ca-ebbc-4d72-b227-20a5c7eed529"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.577870 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b08bf9ca-ebbc-4d72-b227-20a5c7eed529-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.587768 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b08bf9ca-ebbc-4d72-b227-20a5c7eed529-kube-api-access-9lpld" (OuterVolumeSpecName: "kube-api-access-9lpld") pod "b08bf9ca-ebbc-4d72-b227-20a5c7eed529" (UID: "b08bf9ca-ebbc-4d72-b227-20a5c7eed529"). InnerVolumeSpecName "kube-api-access-9lpld". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.681047 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lpld\" (UniqueName: \"kubernetes.io/projected/b08bf9ca-ebbc-4d72-b227-20a5c7eed529-kube-api-access-9lpld\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.713591 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d1e3-account-create-update-27rvz" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.721678 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-fwtxs" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.766130 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zz64p-config-rjp7j"] Feb 18 14:19:31 crc kubenswrapper[4739]: E0218 14:19:31.766645 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66" containerName="mariadb-account-create-update" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.766668 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66" containerName="mariadb-account-create-update" Feb 18 14:19:31 crc kubenswrapper[4739]: E0218 14:19:31.766695 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="075a587a-4bf2-43e9-8c63-1357e9cb05c9" containerName="mariadb-database-create" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.766704 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="075a587a-4bf2-43e9-8c63-1357e9cb05c9" containerName="mariadb-database-create" Feb 18 14:19:31 crc kubenswrapper[4739]: E0218 14:19:31.766722 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b08bf9ca-ebbc-4d72-b227-20a5c7eed529" containerName="mariadb-database-create" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.766729 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b08bf9ca-ebbc-4d72-b227-20a5c7eed529" containerName="mariadb-database-create" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.766988 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66" containerName="mariadb-account-create-update" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.767018 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b08bf9ca-ebbc-4d72-b227-20a5c7eed529" containerName="mariadb-database-create" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.767028 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="075a587a-4bf2-43e9-8c63-1357e9cb05c9" containerName="mariadb-database-create" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.768829 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.772898 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.784186 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htdjf\" (UniqueName: \"kubernetes.io/projected/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66-kube-api-access-htdjf\") pod \"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66\" (UID: \"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66\") " Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.784270 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66-operator-scripts\") pod \"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66\" (UID: \"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66\") " Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.784416 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nplp\" (UniqueName: \"kubernetes.io/projected/075a587a-4bf2-43e9-8c63-1357e9cb05c9-kube-api-access-7nplp\") pod \"075a587a-4bf2-43e9-8c63-1357e9cb05c9\" (UID: \"075a587a-4bf2-43e9-8c63-1357e9cb05c9\") " Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.784639 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/075a587a-4bf2-43e9-8c63-1357e9cb05c9-operator-scripts\") pod \"075a587a-4bf2-43e9-8c63-1357e9cb05c9\" (UID: \"075a587a-4bf2-43e9-8c63-1357e9cb05c9\") " Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.785904 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/075a587a-4bf2-43e9-8c63-1357e9cb05c9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "075a587a-4bf2-43e9-8c63-1357e9cb05c9" (UID: "075a587a-4bf2-43e9-8c63-1357e9cb05c9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.786779 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66" (UID: "c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.793615 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/075a587a-4bf2-43e9-8c63-1357e9cb05c9-kube-api-access-7nplp" (OuterVolumeSpecName: "kube-api-access-7nplp") pod "075a587a-4bf2-43e9-8c63-1357e9cb05c9" (UID: "075a587a-4bf2-43e9-8c63-1357e9cb05c9"). InnerVolumeSpecName "kube-api-access-7nplp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.804176 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdf07d43-6839-4ae1-9efd-bd21557e31f0","Type":"ContainerStarted","Data":"420239777de013111b55f9705b339d83a1c93dfa9079f1331da42bfce805ea29"} Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.807585 4739 generic.go:334] "Generic (PLEG): container finished" podID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" containerID="a1e18a076520af601e6507f431aa025a06385212521ec627530586a088f11655" exitCode=0 Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.807673 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"a5594aaa-fab3-4dad-b79e-17200bc2f1ee","Type":"ContainerDied","Data":"a1e18a076520af601e6507f431aa025a06385212521ec627530586a088f11655"} Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.809048 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66-kube-api-access-htdjf" (OuterVolumeSpecName: "kube-api-access-htdjf") pod "c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66" (UID: "c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66"). InnerVolumeSpecName "kube-api-access-htdjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.812985 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f34a572d-30ca-4de5-bf27-3371e1e9d197","Type":"ContainerStarted","Data":"3228467af95ce70d1ea7ebd3cd207c3fd6c54c75409aecf8eea728d75488502d"} Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.818330 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zz64p-config-rjp7j"] Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.821870 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fwtxs" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.830568 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.830596 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fwtxs" event={"ID":"075a587a-4bf2-43e9-8c63-1357e9cb05c9","Type":"ContainerDied","Data":"9f0626a8e486de18d204ce8ce30bfe092ee4b300499982be629e59e5f5aca34d"} Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.830620 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f0626a8e486de18d204ce8ce30bfe092ee4b300499982be629e59e5f5aca34d" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.836737 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-nndld" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.835969 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-nndld" event={"ID":"b08bf9ca-ebbc-4d72-b227-20a5c7eed529","Type":"ContainerDied","Data":"613a7d90de4a82a3a9fc510a8a51302f9fadb58e779fcf276967614f1d7b949a"} Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.851011 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="613a7d90de4a82a3a9fc510a8a51302f9fadb58e779fcf276967614f1d7b949a" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.859214 4739 generic.go:334] "Generic (PLEG): container finished" podID="70500a97-2717-4761-884a-25cf8ab89380" containerID="50c02016a55a2c9e373d088514e04b072451dfe1867c0fb7a51a817add5d6886" exitCode=0 Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.859327 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"70500a97-2717-4761-884a-25cf8ab89380","Type":"ContainerDied","Data":"50c02016a55a2c9e373d088514e04b072451dfe1867c0fb7a51a817add5d6886"} Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.867420 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=27.22561994 podStartE2EDuration="1m13.8673985s" podCreationTimestamp="2026-02-18 14:18:18 +0000 UTC" firstStartedPulling="2026-02-18 14:18:44.095924675 +0000 UTC m=+1156.591645597" lastFinishedPulling="2026-02-18 14:19:30.737703235 +0000 UTC m=+1203.233424157" observedRunningTime="2026-02-18 14:19:31.846547259 +0000 UTC m=+1204.342268191" watchObservedRunningTime="2026-02-18 14:19:31.8673985 +0000 UTC m=+1204.363119442" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.887954 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-run-ovn\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.888047 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-run\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.888153 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-log-ovn\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.888231 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5vrf\" (UniqueName: \"kubernetes.io/projected/42c00b9a-453b-4ec4-b98c-60547e6987ac-kube-api-access-q5vrf\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.888347 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42c00b9a-453b-4ec4-b98c-60547e6987ac-scripts\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.888372 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/42c00b9a-453b-4ec4-b98c-60547e6987ac-additional-scripts\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.890059 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htdjf\" (UniqueName: \"kubernetes.io/projected/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66-kube-api-access-htdjf\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.890096 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.890112 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nplp\" (UniqueName: \"kubernetes.io/projected/075a587a-4bf2-43e9-8c63-1357e9cb05c9-kube-api-access-7nplp\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.890123 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/075a587a-4bf2-43e9-8c63-1357e9cb05c9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.891860 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d1e3-account-create-update-27rvz" event={"ID":"c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66","Type":"ContainerDied","Data":"00d215ec78bf8c770cacf540ff66f3d4763867f9682f81bcb5a03fb3842969ec"} Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.891904 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00d215ec78bf8c770cacf540ff66f3d4763867f9682f81bcb5a03fb3842969ec" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.891975 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-d1e3-account-create-update-27rvz" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.903541 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b","Type":"ContainerStarted","Data":"1196a1e6460811c94c46f39dbe0fd6c6f691e4c8c02027977bcbe32e7ab65403"} Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.904592 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.942898 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=41.936845504 podStartE2EDuration="1m20.942871062s" podCreationTimestamp="2026-02-18 14:18:11 +0000 UTC" firstStartedPulling="2026-02-18 14:18:16.492961962 +0000 UTC m=+1128.988682894" lastFinishedPulling="2026-02-18 14:18:55.49898753 +0000 UTC m=+1167.994708452" observedRunningTime="2026-02-18 14:19:31.938662115 +0000 UTC m=+1204.434383047" watchObservedRunningTime="2026-02-18 14:19:31.942871062 +0000 UTC m=+1204.438592004" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.993365 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5vrf\" (UniqueName: \"kubernetes.io/projected/42c00b9a-453b-4ec4-b98c-60547e6987ac-kube-api-access-q5vrf\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.994321 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42c00b9a-453b-4ec4-b98c-60547e6987ac-scripts\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:31 crc kubenswrapper[4739]: I0218 14:19:31.999355 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42c00b9a-453b-4ec4-b98c-60547e6987ac-scripts\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.001413 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/42c00b9a-453b-4ec4-b98c-60547e6987ac-additional-scripts\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.001687 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-run-ovn\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.002032 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-run\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:32 crc 
kubenswrapper[4739]: I0218 14:19:32.002410 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/42c00b9a-453b-4ec4-b98c-60547e6987ac-additional-scripts\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.002480 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-log-ovn\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.003320 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-log-ovn\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.003356 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-run-ovn\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.003761 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-run\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.033725 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5vrf\" (UniqueName: \"kubernetes.io/projected/42c00b9a-453b-4ec4-b98c-60547e6987ac-kube-api-access-q5vrf\") pod \"ovn-controller-zz64p-config-rjp7j\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.085390 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=-9223371955.769403 podStartE2EDuration="1m21.085372532s" podCreationTimestamp="2026-02-18 14:18:11 +0000 UTC" firstStartedPulling="2026-02-18 14:18:16.538033798 +0000 UTC m=+1129.033754710" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:32.081928984 +0000 UTC m=+1204.577649916" watchObservedRunningTime="2026-02-18 14:19:32.085372532 +0000 UTC m=+1204.581093454" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.105628 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.587614 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4dc5-account-create-update-shnqq" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.737023 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e4c634d-6e65-4f6b-8001-0ac3e35a4801-operator-scripts\") pod \"8e4c634d-6e65-4f6b-8001-0ac3e35a4801\" (UID: \"8e4c634d-6e65-4f6b-8001-0ac3e35a4801\") " Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.737279 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsqzg\" (UniqueName: \"kubernetes.io/projected/8e4c634d-6e65-4f6b-8001-0ac3e35a4801-kube-api-access-tsqzg\") pod \"8e4c634d-6e65-4f6b-8001-0ac3e35a4801\" (UID: \"8e4c634d-6e65-4f6b-8001-0ac3e35a4801\") " Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.738269 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e4c634d-6e65-4f6b-8001-0ac3e35a4801-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e4c634d-6e65-4f6b-8001-0ac3e35a4801" (UID: "8e4c634d-6e65-4f6b-8001-0ac3e35a4801"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.738968 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e4c634d-6e65-4f6b-8001-0ac3e35a4801-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.746852 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e4c634d-6e65-4f6b-8001-0ac3e35a4801-kube-api-access-tsqzg" (OuterVolumeSpecName: "kube-api-access-tsqzg") pod "8e4c634d-6e65-4f6b-8001-0ac3e35a4801" (UID: "8e4c634d-6e65-4f6b-8001-0ac3e35a4801"). InnerVolumeSpecName "kube-api-access-tsqzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.844225 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsqzg\" (UniqueName: \"kubernetes.io/projected/8e4c634d-6e65-4f6b-8001-0ac3e35a4801-kube-api-access-tsqzg\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.903238 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.949113 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"70500a97-2717-4761-884a-25cf8ab89380","Type":"ContainerStarted","Data":"9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef"} Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.952655 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.958520 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"a5594aaa-fab3-4dad-b79e-17200bc2f1ee","Type":"ContainerStarted","Data":"86dcf3153be4cedc4f3f4f557f9adbf8d2dc9ddb02d52663f80236312bb555f6"} Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.958841 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.962606 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-j927w"] Feb 18 14:19:32 crc kubenswrapper[4739]: E0218 14:19:32.963034 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0275833c-ab0c-4865-9c6e-5c8d54a5e238" containerName="mariadb-database-create" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.963054 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0275833c-ab0c-4865-9c6e-5c8d54a5e238" containerName="mariadb-database-create" Feb 18 14:19:32 crc kubenswrapper[4739]: E0218 14:19:32.963091 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e4c634d-6e65-4f6b-8001-0ac3e35a4801" containerName="mariadb-account-create-update" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.963098 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e4c634d-6e65-4f6b-8001-0ac3e35a4801" containerName="mariadb-account-create-update" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.963304 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0275833c-ab0c-4865-9c6e-5c8d54a5e238" containerName="mariadb-database-create" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.963328 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e4c634d-6e65-4f6b-8001-0ac3e35a4801" containerName="mariadb-account-create-update" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.967295 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j927w" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.970971 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.991153 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.991291 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-m9bmk" event={"ID":"0275833c-ab0c-4865-9c6e-5c8d54a5e238","Type":"ContainerDied","Data":"c577b37bc548486a245d849da0df3c462ef996dd123f2fe9d21e5c0d211b304a"} Feb 18 14:19:32 crc kubenswrapper[4739]: I0218 14:19:32.991344 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c577b37bc548486a245d849da0df3c462ef996dd123f2fe9d21e5c0d211b304a" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:32.995561 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:32.997050 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4dc5-account-create-update-shnqq" event={"ID":"8e4c634d-6e65-4f6b-8001-0ac3e35a4801","Type":"ContainerDied","Data":"54492e6d106546731c753047a5db4d88768e53ebe0159a58f4f35c4a92c5b155"} Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:32.997089 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54492e6d106546731c753047a5db4d88768e53ebe0159a58f4f35c4a92c5b155" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:32.997129 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4dc5-account-create-update-shnqq" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.021317 4739 generic.go:334] "Generic (PLEG): container finished" podID="ab89b7a2-642d-4a99-9eb4-f01b2990e75d" containerID="74f496583eea24c7aa24787e4734e6c62cca95951d885c0cd6942e3b4f8ff69f" exitCode=0 Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.022230 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cfjpx" event={"ID":"ab89b7a2-642d-4a99-9eb4-f01b2990e75d","Type":"ContainerDied","Data":"74f496583eea24c7aa24787e4734e6c62cca95951d885c0cd6942e3b4f8ff69f"} Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.025272 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-973a-account-create-update-lsz5w" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.139993 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhjgq\" (UniqueName: \"kubernetes.io/projected/0275833c-ab0c-4865-9c6e-5c8d54a5e238-kube-api-access-bhjgq\") pod \"0275833c-ab0c-4865-9c6e-5c8d54a5e238\" (UID: \"0275833c-ab0c-4865-9c6e-5c8d54a5e238\") " Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.140079 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1637477-36b3-4dea-b260-15b6e2532af8-operator-scripts\") pod \"e1637477-36b3-4dea-b260-15b6e2532af8\" (UID: \"e1637477-36b3-4dea-b260-15b6e2532af8\") " Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.140215 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c50e4a24-ad83-4694-be4d-6b0811726c3d-operator-scripts\") pod \"c50e4a24-ad83-4694-be4d-6b0811726c3d\" (UID: \"c50e4a24-ad83-4694-be4d-6b0811726c3d\") " Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.140321 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9mcc\" (UniqueName: \"kubernetes.io/projected/e1637477-36b3-4dea-b260-15b6e2532af8-kube-api-access-m9mcc\") pod \"e1637477-36b3-4dea-b260-15b6e2532af8\" (UID: \"e1637477-36b3-4dea-b260-15b6e2532af8\") " Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.140346 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0275833c-ab0c-4865-9c6e-5c8d54a5e238-operator-scripts\") pod \"0275833c-ab0c-4865-9c6e-5c8d54a5e238\" (UID: \"0275833c-ab0c-4865-9c6e-5c8d54a5e238\") " Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.140391 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppqdz\" (UniqueName: \"kubernetes.io/projected/c50e4a24-ad83-4694-be4d-6b0811726c3d-kube-api-access-ppqdz\") pod \"c50e4a24-ad83-4694-be4d-6b0811726c3d\" (UID: \"c50e4a24-ad83-4694-be4d-6b0811726c3d\") " Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.147357 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1637477-36b3-4dea-b260-15b6e2532af8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1637477-36b3-4dea-b260-15b6e2532af8" (UID: "e1637477-36b3-4dea-b260-15b6e2532af8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.147921 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0275833c-ab0c-4865-9c6e-5c8d54a5e238-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0275833c-ab0c-4865-9c6e-5c8d54a5e238" (UID: "0275833c-ab0c-4865-9c6e-5c8d54a5e238"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.148389 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c50e4a24-ad83-4694-be4d-6b0811726c3d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c50e4a24-ad83-4694-be4d-6b0811726c3d" (UID: "c50e4a24-ad83-4694-be4d-6b0811726c3d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.150555 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1637477-36b3-4dea-b260-15b6e2532af8-kube-api-access-m9mcc" (OuterVolumeSpecName: "kube-api-access-m9mcc") pod "e1637477-36b3-4dea-b260-15b6e2532af8" (UID: "e1637477-36b3-4dea-b260-15b6e2532af8"). InnerVolumeSpecName "kube-api-access-m9mcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.161387 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0275833c-ab0c-4865-9c6e-5c8d54a5e238-kube-api-access-bhjgq" (OuterVolumeSpecName: "kube-api-access-bhjgq") pod "0275833c-ab0c-4865-9c6e-5c8d54a5e238" (UID: "0275833c-ab0c-4865-9c6e-5c8d54a5e238"). InnerVolumeSpecName "kube-api-access-bhjgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.162749 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c50e4a24-ad83-4694-be4d-6b0811726c3d-kube-api-access-ppqdz" (OuterVolumeSpecName: "kube-api-access-ppqdz") pod "c50e4a24-ad83-4694-be4d-6b0811726c3d" (UID: "c50e4a24-ad83-4694-be4d-6b0811726c3d"). InnerVolumeSpecName "kube-api-access-ppqdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.196043 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-j927w"] Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.214020 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=42.629427603 podStartE2EDuration="1m22.213991949s" podCreationTimestamp="2026-02-18 14:18:11 +0000 UTC" firstStartedPulling="2026-02-18 14:18:16.559907305 +0000 UTC m=+1129.055628227" lastFinishedPulling="2026-02-18 14:18:56.144471651 +0000 UTC m=+1168.640192573" observedRunningTime="2026-02-18 14:19:32.993217065 +0000 UTC m=+1205.488937997" watchObservedRunningTime="2026-02-18 14:19:33.213991949 +0000 UTC m=+1205.709712871" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.227605 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=42.692008945 podStartE2EDuration="1m22.227583095s" podCreationTimestamp="2026-02-18 14:18:11 +0000 UTC" firstStartedPulling="2026-02-18 14:18:16.605654428 +0000 UTC m=+1129.101375350" lastFinishedPulling="2026-02-18 14:18:56.141228578 +0000 UTC m=+1168.636949500" observedRunningTime="2026-02-18 14:19:33.040847429 +0000 UTC m=+1205.536568351" watchObservedRunningTime="2026-02-18 14:19:33.227583095 +0000 UTC m=+1205.723304037" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.243894 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009b4d4e-6b53-4e8d-a03e-79c96c50425b-operator-scripts\") pod \"root-account-create-update-j927w\" (UID: \"009b4d4e-6b53-4e8d-a03e-79c96c50425b\") " pod="openstack/root-account-create-update-j927w" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.243999 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szljh\" (UniqueName: 
\"kubernetes.io/projected/009b4d4e-6b53-4e8d-a03e-79c96c50425b-kube-api-access-szljh\") pod \"root-account-create-update-j927w\" (UID: \"009b4d4e-6b53-4e8d-a03e-79c96c50425b\") " pod="openstack/root-account-create-update-j927w" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.244277 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9mcc\" (UniqueName: \"kubernetes.io/projected/e1637477-36b3-4dea-b260-15b6e2532af8-kube-api-access-m9mcc\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.244298 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0275833c-ab0c-4865-9c6e-5c8d54a5e238-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.244307 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppqdz\" (UniqueName: \"kubernetes.io/projected/c50e4a24-ad83-4694-be4d-6b0811726c3d-kube-api-access-ppqdz\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.244317 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhjgq\" (UniqueName: \"kubernetes.io/projected/0275833c-ab0c-4865-9c6e-5c8d54a5e238-kube-api-access-bhjgq\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.244327 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1637477-36b3-4dea-b260-15b6e2532af8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.244335 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c50e4a24-ad83-4694-be4d-6b0811726c3d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.285513 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-x8lmx" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.345416 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8c94ce9-7b1b-43bd-9c93-303d0e675809-operator-scripts\") pod \"f8c94ce9-7b1b-43bd-9c93-303d0e675809\" (UID: \"f8c94ce9-7b1b-43bd-9c93-303d0e675809\") " Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.345583 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdpkq\" (UniqueName: \"kubernetes.io/projected/f8c94ce9-7b1b-43bd-9c93-303d0e675809-kube-api-access-qdpkq\") pod \"f8c94ce9-7b1b-43bd-9c93-303d0e675809\" (UID: \"f8c94ce9-7b1b-43bd-9c93-303d0e675809\") " Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.345908 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szljh\" (UniqueName: \"kubernetes.io/projected/009b4d4e-6b53-4e8d-a03e-79c96c50425b-kube-api-access-szljh\") pod \"root-account-create-update-j927w\" (UID: \"009b4d4e-6b53-4e8d-a03e-79c96c50425b\") " pod="openstack/root-account-create-update-j927w" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.346169 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009b4d4e-6b53-4e8d-a03e-79c96c50425b-operator-scripts\") pod \"root-account-create-update-j927w\" (UID: \"009b4d4e-6b53-4e8d-a03e-79c96c50425b\") " pod="openstack/root-account-create-update-j927w" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.346648 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8c94ce9-7b1b-43bd-9c93-303d0e675809-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f8c94ce9-7b1b-43bd-9c93-303d0e675809" (UID: "f8c94ce9-7b1b-43bd-9c93-303d0e675809"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.347038 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009b4d4e-6b53-4e8d-a03e-79c96c50425b-operator-scripts\") pod \"root-account-create-update-j927w\" (UID: \"009b4d4e-6b53-4e8d-a03e-79c96c50425b\") " pod="openstack/root-account-create-update-j927w" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.363846 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8c94ce9-7b1b-43bd-9c93-303d0e675809-kube-api-access-qdpkq" (OuterVolumeSpecName: "kube-api-access-qdpkq") pod "f8c94ce9-7b1b-43bd-9c93-303d0e675809" (UID: "f8c94ce9-7b1b-43bd-9c93-303d0e675809"). InnerVolumeSpecName "kube-api-access-qdpkq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.375374 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szljh\" (UniqueName: \"kubernetes.io/projected/009b4d4e-6b53-4e8d-a03e-79c96c50425b-kube-api-access-szljh\") pod \"root-account-create-update-j927w\" (UID: \"009b4d4e-6b53-4e8d-a03e-79c96c50425b\") " pod="openstack/root-account-create-update-j927w" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.428261 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zz64p-config-rjp7j"] Feb 18 14:19:33 crc kubenswrapper[4739]: W0218 14:19:33.439272 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42c00b9a_453b_4ec4_b98c_60547e6987ac.slice/crio-394ca67d9f757b274fe81f49f3a126b93f363ba54100cbe81fa38f833aefaa6f WatchSource:0}: Error finding container 394ca67d9f757b274fe81f49f3a126b93f363ba54100cbe81fa38f833aefaa6f: Status 404 returned error can't find the container with id 394ca67d9f757b274fe81f49f3a126b93f363ba54100cbe81fa38f833aefaa6f Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.454746 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdpkq\" (UniqueName: \"kubernetes.io/projected/f8c94ce9-7b1b-43bd-9c93-303d0e675809-kube-api-access-qdpkq\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.454793 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8c94ce9-7b1b-43bd-9c93-303d0e675809-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:33 crc kubenswrapper[4739]: I0218 14:19:33.586065 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j927w" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.035441 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.035476 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-84ff-account-create-update-9xb4v" event={"ID":"c50e4a24-ad83-4694-be4d-6b0811726c3d","Type":"ContainerDied","Data":"8d50214c2ea47b4d718d57a39461e35c1ec6d3d03c076b4695023973166e29bf"} Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.036040 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d50214c2ea47b4d718d57a39461e35c1ec6d3d03c076b4695023973166e29bf" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.038966 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zz64p-config-rjp7j" event={"ID":"42c00b9a-453b-4ec4-b98c-60547e6987ac","Type":"ContainerStarted","Data":"405502ac3609c5b3fd9875f3041040fcb2500cda1197ef6aa5109c839a432fea"} Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.039042 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zz64p-config-rjp7j" event={"ID":"42c00b9a-453b-4ec4-b98c-60547e6987ac","Type":"ContainerStarted","Data":"394ca67d9f757b274fe81f49f3a126b93f363ba54100cbe81fa38f833aefaa6f"} Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.040995 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-973a-account-create-update-lsz5w" event={"ID":"e1637477-36b3-4dea-b260-15b6e2532af8","Type":"ContainerDied","Data":"d6646a29cf0de84fa8bed99394a55b7c9c035ddad6cd104b66ee80a2d71f20e1"} Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.041039 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6646a29cf0de84fa8bed99394a55b7c9c035ddad6cd104b66ee80a2d71f20e1" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.041104 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-973a-account-create-update-lsz5w" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.047856 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x8lmx" event={"ID":"f8c94ce9-7b1b-43bd-9c93-303d0e675809","Type":"ContainerDied","Data":"26f6e1134e16bdbdb98e6a4ce05e0bd26a0a24d306555e5abd05bd34c7e3b00d"} Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.047917 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26f6e1134e16bdbdb98e6a4ce05e0bd26a0a24d306555e5abd05bd34c7e3b00d" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.047937 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-x8lmx" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.065016 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-zz64p-config-rjp7j" podStartSLOduration=3.064995204 podStartE2EDuration="3.064995204s" podCreationTimestamp="2026-02-18 14:19:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:34.060197571 +0000 UTC m=+1206.555918493" watchObservedRunningTime="2026-02-18 14:19:34.064995204 +0000 UTC m=+1206.560716126" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.255696 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-j927w"] Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.689861 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.790086 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-dispersionconf\") pod \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.790153 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-swiftconf\") pod \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.790176 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-etc-swift\") pod \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.790239 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-ring-data-devices\") pod \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.790333 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twx6f\" (UniqueName: \"kubernetes.io/projected/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-kube-api-access-twx6f\") pod \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.790363 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-scripts\") pod \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.790383 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-combined-ca-bundle\") pod \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\" (UID: \"ab89b7a2-642d-4a99-9eb4-f01b2990e75d\") " Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.791148 4739 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "ab89b7a2-642d-4a99-9eb4-f01b2990e75d" (UID: "ab89b7a2-642d-4a99-9eb4-f01b2990e75d"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.791431 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "ab89b7a2-642d-4a99-9eb4-f01b2990e75d" (UID: "ab89b7a2-642d-4a99-9eb4-f01b2990e75d"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.798254 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-kube-api-access-twx6f" (OuterVolumeSpecName: "kube-api-access-twx6f") pod "ab89b7a2-642d-4a99-9eb4-f01b2990e75d" (UID: "ab89b7a2-642d-4a99-9eb4-f01b2990e75d"). InnerVolumeSpecName "kube-api-access-twx6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.799687 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "ab89b7a2-642d-4a99-9eb4-f01b2990e75d" (UID: "ab89b7a2-642d-4a99-9eb4-f01b2990e75d"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.818789 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-scripts" (OuterVolumeSpecName: "scripts") pod "ab89b7a2-642d-4a99-9eb4-f01b2990e75d" (UID: "ab89b7a2-642d-4a99-9eb4-f01b2990e75d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.838615 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab89b7a2-642d-4a99-9eb4-f01b2990e75d" (UID: "ab89b7a2-642d-4a99-9eb4-f01b2990e75d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.838925 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "ab89b7a2-642d-4a99-9eb4-f01b2990e75d" (UID: "ab89b7a2-642d-4a99-9eb4-f01b2990e75d"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.893155 4739 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.893197 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twx6f\" (UniqueName: \"kubernetes.io/projected/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-kube-api-access-twx6f\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.893216 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.893227 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.893239 4739 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.893249 4739 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:34 crc kubenswrapper[4739]: I0218 14:19:34.893258 4739 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ab89b7a2-642d-4a99-9eb4-f01b2990e75d-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:35 crc kubenswrapper[4739]: I0218 14:19:35.058668 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j927w" event={"ID":"009b4d4e-6b53-4e8d-a03e-79c96c50425b","Type":"ContainerStarted","Data":"4041330ab9876dd3ccc3269fd63191d50dd8718454d5e9168b48f08746b23647"} Feb 18 14:19:35 crc kubenswrapper[4739]: I0218 14:19:35.058724 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j927w" event={"ID":"009b4d4e-6b53-4e8d-a03e-79c96c50425b","Type":"ContainerStarted","Data":"8ce1f00e0dd0b9ea8a548b02136bb281984b25347c6ff94b43935c636e20b23c"} Feb 18 14:19:35 crc kubenswrapper[4739]: I0218 14:19:35.061675 4739 generic.go:334] "Generic (PLEG): container finished" podID="42c00b9a-453b-4ec4-b98c-60547e6987ac" containerID="405502ac3609c5b3fd9875f3041040fcb2500cda1197ef6aa5109c839a432fea" exitCode=0 Feb 18 14:19:35 crc kubenswrapper[4739]: I0218 14:19:35.061836 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zz64p-config-rjp7j" event={"ID":"42c00b9a-453b-4ec4-b98c-60547e6987ac","Type":"ContainerDied","Data":"405502ac3609c5b3fd9875f3041040fcb2500cda1197ef6aa5109c839a432fea"} Feb 18 14:19:35 crc kubenswrapper[4739]: I0218 14:19:35.063527 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cfjpx" event={"ID":"ab89b7a2-642d-4a99-9eb4-f01b2990e75d","Type":"ContainerDied","Data":"542842abdf2ee0753ae804a9cea526e4b6d5b0555fbd53a632bf6c534bb3371f"} Feb 18 14:19:35 crc kubenswrapper[4739]: I0218 14:19:35.063571 4739 pod_container_deletor.go:80] "Container not found 
in pod's containers" containerID="542842abdf2ee0753ae804a9cea526e4b6d5b0555fbd53a632bf6c534bb3371f" Feb 18 14:19:35 crc kubenswrapper[4739]: I0218 14:19:35.063586 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-cfjpx" Feb 18 14:19:35 crc kubenswrapper[4739]: I0218 14:19:35.082146 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-j927w" podStartSLOduration=3.082123111 podStartE2EDuration="3.082123111s" podCreationTimestamp="2026-02-18 14:19:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:35.073324017 +0000 UTC m=+1207.569044949" watchObservedRunningTime="2026-02-18 14:19:35.082123111 +0000 UTC m=+1207.577844023" Feb 18 14:19:35 crc kubenswrapper[4739]: I0218 14:19:35.640761 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:35 crc kubenswrapper[4739]: I0218 14:19:35.640908 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:35 crc kubenswrapper[4739]: I0218 14:19:35.646427 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.089602 4739 generic.go:334] "Generic (PLEG): container finished" podID="009b4d4e-6b53-4e8d-a03e-79c96c50425b" containerID="4041330ab9876dd3ccc3269fd63191d50dd8718454d5e9168b48f08746b23647" exitCode=0 Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.092184 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j927w" event={"ID":"009b4d4e-6b53-4e8d-a03e-79c96c50425b","Type":"ContainerDied","Data":"4041330ab9876dd3ccc3269fd63191d50dd8718454d5e9168b48f08746b23647"} Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.093397 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.391972 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-zz64p" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.587824 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.635130 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5vrf\" (UniqueName: \"kubernetes.io/projected/42c00b9a-453b-4ec4-b98c-60547e6987ac-kube-api-access-q5vrf\") pod \"42c00b9a-453b-4ec4-b98c-60547e6987ac\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.635206 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-run-ovn\") pod \"42c00b9a-453b-4ec4-b98c-60547e6987ac\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.635300 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/42c00b9a-453b-4ec4-b98c-60547e6987ac-additional-scripts\") pod \"42c00b9a-453b-4ec4-b98c-60547e6987ac\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.635288 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "42c00b9a-453b-4ec4-b98c-60547e6987ac" (UID: "42c00b9a-453b-4ec4-b98c-60547e6987ac"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.635369 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-log-ovn\") pod \"42c00b9a-453b-4ec4-b98c-60547e6987ac\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.635418 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42c00b9a-453b-4ec4-b98c-60547e6987ac-scripts\") pod \"42c00b9a-453b-4ec4-b98c-60547e6987ac\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.635458 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-run\") pod \"42c00b9a-453b-4ec4-b98c-60547e6987ac\" (UID: \"42c00b9a-453b-4ec4-b98c-60547e6987ac\") " Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.635478 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "42c00b9a-453b-4ec4-b98c-60547e6987ac" (UID: "42c00b9a-453b-4ec4-b98c-60547e6987ac"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.635674 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-run" (OuterVolumeSpecName: "var-run") pod "42c00b9a-453b-4ec4-b98c-60547e6987ac" (UID: "42c00b9a-453b-4ec4-b98c-60547e6987ac"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.636066 4739 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.636083 4739 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-run\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.636092 4739 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/42c00b9a-453b-4ec4-b98c-60547e6987ac-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.636290 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42c00b9a-453b-4ec4-b98c-60547e6987ac-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "42c00b9a-453b-4ec4-b98c-60547e6987ac" (UID: "42c00b9a-453b-4ec4-b98c-60547e6987ac"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.636602 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42c00b9a-453b-4ec4-b98c-60547e6987ac-scripts" (OuterVolumeSpecName: "scripts") pod "42c00b9a-453b-4ec4-b98c-60547e6987ac" (UID: "42c00b9a-453b-4ec4-b98c-60547e6987ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.641432 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c00b9a-453b-4ec4-b98c-60547e6987ac-kube-api-access-q5vrf" (OuterVolumeSpecName: "kube-api-access-q5vrf") pod "42c00b9a-453b-4ec4-b98c-60547e6987ac" (UID: "42c00b9a-453b-4ec4-b98c-60547e6987ac"). InnerVolumeSpecName "kube-api-access-q5vrf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.738121 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/42c00b9a-453b-4ec4-b98c-60547e6987ac-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.738424 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5vrf\" (UniqueName: \"kubernetes.io/projected/42c00b9a-453b-4ec4-b98c-60547e6987ac-kube-api-access-q5vrf\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:36 crc kubenswrapper[4739]: I0218 14:19:36.738538 4739 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/42c00b9a-453b-4ec4-b98c-60547e6987ac-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.153797 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b3be45be-9ee4-4114-b2e5-78d9b0341129","Type":"ContainerStarted","Data":"24aebdd733cf86d50f4d81a80351f3ecdfb5d71c209f40b4f4767559533e0933"} Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.154571 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.160396 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zz64p-config-rjp7j" event={"ID":"42c00b9a-453b-4ec4-b98c-60547e6987ac","Type":"ContainerDied","Data":"394ca67d9f757b274fe81f49f3a126b93f363ba54100cbe81fa38f833aefaa6f"} Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.160460 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="394ca67d9f757b274fe81f49f3a126b93f363ba54100cbe81fa38f833aefaa6f" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.160560 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zz64p-config-rjp7j" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.206598 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.632131139 podStartE2EDuration="35.206574933s" podCreationTimestamp="2026-02-18 14:19:02 +0000 UTC" firstStartedPulling="2026-02-18 14:19:04.275681719 +0000 UTC m=+1176.771402641" lastFinishedPulling="2026-02-18 14:19:35.850125513 +0000 UTC m=+1208.345846435" observedRunningTime="2026-02-18 14:19:37.183245279 +0000 UTC m=+1209.678966221" watchObservedRunningTime="2026-02-18 14:19:37.206574933 +0000 UTC m=+1209.702295875" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.237021 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-gnm8m"] Feb 18 14:19:37 crc kubenswrapper[4739]: E0218 14:19:37.237556 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c50e4a24-ad83-4694-be4d-6b0811726c3d" containerName="mariadb-account-create-update" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.237578 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c50e4a24-ad83-4694-be4d-6b0811726c3d" containerName="mariadb-account-create-update" Feb 18 14:19:37 crc kubenswrapper[4739]: E0218 14:19:37.237604 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab89b7a2-642d-4a99-9eb4-f01b2990e75d" containerName="swift-ring-rebalance" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.237614 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab89b7a2-642d-4a99-9eb4-f01b2990e75d" containerName="swift-ring-rebalance" Feb 18 14:19:37 crc kubenswrapper[4739]: E0218 14:19:37.237630 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1637477-36b3-4dea-b260-15b6e2532af8" containerName="mariadb-account-create-update" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.237638 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1637477-36b3-4dea-b260-15b6e2532af8" containerName="mariadb-account-create-update" Feb 18 14:19:37 crc kubenswrapper[4739]: E0218 14:19:37.237652 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8c94ce9-7b1b-43bd-9c93-303d0e675809" containerName="mariadb-database-create" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.237659 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8c94ce9-7b1b-43bd-9c93-303d0e675809" containerName="mariadb-database-create" Feb 18 14:19:37 crc kubenswrapper[4739]: E0218 14:19:37.237675 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c00b9a-453b-4ec4-b98c-60547e6987ac" containerName="ovn-config" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.237701 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c00b9a-453b-4ec4-b98c-60547e6987ac" containerName="ovn-config" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.237937 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab89b7a2-642d-4a99-9eb4-f01b2990e75d" containerName="swift-ring-rebalance" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.237975 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c00b9a-453b-4ec4-b98c-60547e6987ac" containerName="ovn-config" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.237993 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c50e4a24-ad83-4694-be4d-6b0811726c3d" containerName="mariadb-account-create-update" Feb 18 14:19:37 crc 
kubenswrapper[4739]: I0218 14:19:37.238015 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8c94ce9-7b1b-43bd-9c93-303d0e675809" containerName="mariadb-database-create" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.238037 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1637477-36b3-4dea-b260-15b6e2532af8" containerName="mariadb-account-create-update" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.238940 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.243993 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.244199 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gvb8h" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.252560 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gnm8m"] Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.363690 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-combined-ca-bundle\") pod \"glance-db-sync-gnm8m\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.363788 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-db-sync-config-data\") pod \"glance-db-sync-gnm8m\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.363888 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-config-data\") pod \"glance-db-sync-gnm8m\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.363950 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzphm\" (UniqueName: \"kubernetes.io/projected/edf3454e-4ac2-42a7-98b1-0f43065764c2-kube-api-access-bzphm\") pod \"glance-db-sync-gnm8m\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.467133 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-config-data\") pod \"glance-db-sync-gnm8m\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.467273 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzphm\" (UniqueName: \"kubernetes.io/projected/edf3454e-4ac2-42a7-98b1-0f43065764c2-kube-api-access-bzphm\") pod \"glance-db-sync-gnm8m\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.467352 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-combined-ca-bundle\") pod \"glance-db-sync-gnm8m\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.467467 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-db-sync-config-data\") pod \"glance-db-sync-gnm8m\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.477020 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-db-sync-config-data\") pod \"glance-db-sync-gnm8m\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.480155 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-combined-ca-bundle\") pod \"glance-db-sync-gnm8m\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.491635 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-config-data\") pod \"glance-db-sync-gnm8m\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.496621 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzphm\" (UniqueName: \"kubernetes.io/projected/edf3454e-4ac2-42a7-98b1-0f43065764c2-kube-api-access-bzphm\") pod \"glance-db-sync-gnm8m\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.579488 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gnm8m" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.714502 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j927w" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.777275 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szljh\" (UniqueName: \"kubernetes.io/projected/009b4d4e-6b53-4e8d-a03e-79c96c50425b-kube-api-access-szljh\") pod \"009b4d4e-6b53-4e8d-a03e-79c96c50425b\" (UID: \"009b4d4e-6b53-4e8d-a03e-79c96c50425b\") " Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.777910 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009b4d4e-6b53-4e8d-a03e-79c96c50425b-operator-scripts\") pod \"009b4d4e-6b53-4e8d-a03e-79c96c50425b\" (UID: \"009b4d4e-6b53-4e8d-a03e-79c96c50425b\") " Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.779224 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/009b4d4e-6b53-4e8d-a03e-79c96c50425b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "009b4d4e-6b53-4e8d-a03e-79c96c50425b" (UID: "009b4d4e-6b53-4e8d-a03e-79c96c50425b"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.790394 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/009b4d4e-6b53-4e8d-a03e-79c96c50425b-kube-api-access-szljh" (OuterVolumeSpecName: "kube-api-access-szljh") pod "009b4d4e-6b53-4e8d-a03e-79c96c50425b" (UID: "009b4d4e-6b53-4e8d-a03e-79c96c50425b"). InnerVolumeSpecName "kube-api-access-szljh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.790554 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zz64p-config-rjp7j"] Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.817704 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zz64p-config-rjp7j"] Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.881013 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/009b4d4e-6b53-4e8d-a03e-79c96c50425b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.881054 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szljh\" (UniqueName: \"kubernetes.io/projected/009b4d4e-6b53-4e8d-a03e-79c96c50425b-kube-api-access-szljh\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.883998 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zz64p-config-zqwr9"] Feb 18 14:19:37 crc kubenswrapper[4739]: E0218 14:19:37.884537 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="009b4d4e-6b53-4e8d-a03e-79c96c50425b" containerName="mariadb-account-create-update" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.884557 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="009b4d4e-6b53-4e8d-a03e-79c96c50425b" containerName="mariadb-account-create-update" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.884837 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="009b4d4e-6b53-4e8d-a03e-79c96c50425b" containerName="mariadb-account-create-update" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.885773 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.892936 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.903225 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zz64p-config-zqwr9"] Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.983063 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtd87\" (UniqueName: \"kubernetes.io/projected/c9b1f63c-45e3-41c6-b25a-7136017ef699-kube-api-access-mtd87\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.983129 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c9b1f63c-45e3-41c6-b25a-7136017ef699-additional-scripts\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.983231 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-run\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.983283 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-log-ovn\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.983470 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9b1f63c-45e3-41c6-b25a-7136017ef699-scripts\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:37 crc kubenswrapper[4739]: I0218 14:19:37.983555 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-run-ovn\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.085050 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9b1f63c-45e3-41c6-b25a-7136017ef699-scripts\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.085142 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-run-ovn\") pod 
\"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.085176 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtd87\" (UniqueName: \"kubernetes.io/projected/c9b1f63c-45e3-41c6-b25a-7136017ef699-kube-api-access-mtd87\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.085199 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c9b1f63c-45e3-41c6-b25a-7136017ef699-additional-scripts\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.085260 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-run\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.085291 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-log-ovn\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.085596 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-log-ovn\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.087325 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9b1f63c-45e3-41c6-b25a-7136017ef699-scripts\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.087373 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-run-ovn\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.088109 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-run\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.088612 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c9b1f63c-45e3-41c6-b25a-7136017ef699-additional-scripts\") pod 
\"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.108866 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtd87\" (UniqueName: \"kubernetes.io/projected/c9b1f63c-45e3-41c6-b25a-7136017ef699-kube-api-access-mtd87\") pod \"ovn-controller-zz64p-config-zqwr9\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.179154 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j927w" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.179548 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j927w" event={"ID":"009b4d4e-6b53-4e8d-a03e-79c96c50425b","Type":"ContainerDied","Data":"8ce1f00e0dd0b9ea8a548b02136bb281984b25347c6ff94b43935c636e20b23c"} Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.179597 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ce1f00e0dd0b9ea8a548b02136bb281984b25347c6ff94b43935c636e20b23c" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.225411 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.324377 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gnm8m"] Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.456762 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42c00b9a-453b-4ec4-b98c-60547e6987ac" path="/var/lib/kubelet/pods/42c00b9a-453b-4ec4-b98c-60547e6987ac/volumes" Feb 18 14:19:38 crc kubenswrapper[4739]: I0218 14:19:38.831433 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zz64p-config-zqwr9"] Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.048377 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm"] Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.049742 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.064771 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm"] Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.113774 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4689ea28-dac4-434f-af87-18d6fc903330-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-n6kgm\" (UID: \"4689ea28-dac4-434f-af87-18d6fc903330\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.116575 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xwmf\" (UniqueName: \"kubernetes.io/projected/4689ea28-dac4-434f-af87-18d6fc903330-kube-api-access-9xwmf\") pod \"mysqld-exporter-openstack-cell1-db-create-n6kgm\" (UID: \"4689ea28-dac4-434f-af87-18d6fc903330\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.205670 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gnm8m" event={"ID":"edf3454e-4ac2-42a7-98b1-0f43065764c2","Type":"ContainerStarted","Data":"2b55e9103d7f00a94e8592c5a8d14e8e0f69cd459f1c5013831102a48b6f0d28"} Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.207284 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zz64p-config-zqwr9" event={"ID":"c9b1f63c-45e3-41c6-b25a-7136017ef699","Type":"ContainerStarted","Data":"8a12b70fe38cbeeedf9e3138a2a60817e675e6940e1ede9c344cc11b3e9be763"} Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.220861 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xwmf\" (UniqueName: \"kubernetes.io/projected/4689ea28-dac4-434f-af87-18d6fc903330-kube-api-access-9xwmf\") pod \"mysqld-exporter-openstack-cell1-db-create-n6kgm\" (UID: \"4689ea28-dac4-434f-af87-18d6fc903330\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.221201 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4689ea28-dac4-434f-af87-18d6fc903330-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-n6kgm\" (UID: \"4689ea28-dac4-434f-af87-18d6fc903330\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.230621 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4689ea28-dac4-434f-af87-18d6fc903330-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-n6kgm\" (UID: \"4689ea28-dac4-434f-af87-18d6fc903330\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.261074 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xwmf\" (UniqueName: \"kubernetes.io/projected/4689ea28-dac4-434f-af87-18d6fc903330-kube-api-access-9xwmf\") pod \"mysqld-exporter-openstack-cell1-db-create-n6kgm\" (UID: \"4689ea28-dac4-434f-af87-18d6fc903330\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" Feb 18 14:19:39 
crc kubenswrapper[4739]: I0218 14:19:39.307511 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-d06e-account-create-update-nwqxj"] Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.309184 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.315747 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-d06e-account-create-update-nwqxj"] Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.329803 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.375919 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.381747 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.382158 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="config-reloader" containerID="cri-o://20e4696ddb81097644db58c7ff47cdd8db35bca8af8eb47dfd10333be0e9ab30" gracePeriod=600 Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.382524 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="prometheus" containerID="cri-o://420239777de013111b55f9705b339d83a1c93dfa9079f1331da42bfce805ea29" gracePeriod=600 Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.382646 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="thanos-sidecar" containerID="cri-o://33e26c074fe392c233d18320191c667cb0f7939b2787e917560ff0fa66b0f407" gracePeriod=600 Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.425051 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69l57\" (UniqueName: \"kubernetes.io/projected/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff-kube-api-access-69l57\") pod \"mysqld-exporter-d06e-account-create-update-nwqxj\" (UID: \"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff\") " pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.425178 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff-operator-scripts\") pod \"mysqld-exporter-d06e-account-create-update-nwqxj\" (UID: \"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff\") " pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.462925 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-j927w"] Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.479745 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-j927w"] Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.527093 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-69l57\" (UniqueName: \"kubernetes.io/projected/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff-kube-api-access-69l57\") pod \"mysqld-exporter-d06e-account-create-update-nwqxj\" (UID: \"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff\") " pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.528265 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff-operator-scripts\") pod \"mysqld-exporter-d06e-account-create-update-nwqxj\" (UID: \"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff\") " pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.529007 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff-operator-scripts\") pod \"mysqld-exporter-d06e-account-create-update-nwqxj\" (UID: \"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff\") " pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.558612 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69l57\" (UniqueName: \"kubernetes.io/projected/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff-kube-api-access-69l57\") pod \"mysqld-exporter-d06e-account-create-update-nwqxj\" (UID: \"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff\") " pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" Feb 18 14:19:39 crc kubenswrapper[4739]: I0218 14:19:39.643470 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" Feb 18 14:19:40 crc kubenswrapper[4739]: I0218 14:19:40.005592 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm"] Feb 18 14:19:40 crc kubenswrapper[4739]: I0218 14:19:40.223711 4739 generic.go:334] "Generic (PLEG): container finished" podID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerID="420239777de013111b55f9705b339d83a1c93dfa9079f1331da42bfce805ea29" exitCode=0 Feb 18 14:19:40 crc kubenswrapper[4739]: I0218 14:19:40.223751 4739 generic.go:334] "Generic (PLEG): container finished" podID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerID="33e26c074fe392c233d18320191c667cb0f7939b2787e917560ff0fa66b0f407" exitCode=0 Feb 18 14:19:40 crc kubenswrapper[4739]: I0218 14:19:40.223765 4739 generic.go:334] "Generic (PLEG): container finished" podID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerID="20e4696ddb81097644db58c7ff47cdd8db35bca8af8eb47dfd10333be0e9ab30" exitCode=0 Feb 18 14:19:40 crc kubenswrapper[4739]: I0218 14:19:40.223788 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdf07d43-6839-4ae1-9efd-bd21557e31f0","Type":"ContainerDied","Data":"420239777de013111b55f9705b339d83a1c93dfa9079f1331da42bfce805ea29"} Feb 18 14:19:40 crc kubenswrapper[4739]: I0218 14:19:40.223854 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdf07d43-6839-4ae1-9efd-bd21557e31f0","Type":"ContainerDied","Data":"33e26c074fe392c233d18320191c667cb0f7939b2787e917560ff0fa66b0f407"} Feb 18 14:19:40 crc kubenswrapper[4739]: I0218 14:19:40.223867 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"fdf07d43-6839-4ae1-9efd-bd21557e31f0","Type":"ContainerDied","Data":"20e4696ddb81097644db58c7ff47cdd8db35bca8af8eb47dfd10333be0e9ab30"} Feb 18 14:19:40 crc kubenswrapper[4739]: I0218 14:19:40.226480 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zz64p-config-zqwr9" event={"ID":"c9b1f63c-45e3-41c6-b25a-7136017ef699","Type":"ContainerStarted","Data":"7c4bb8b1c5394b1feff00226f10597657ca326d8c75003b9dcfbb17edea1d2b3"} Feb 18 14:19:40 crc kubenswrapper[4739]: I0218 14:19:40.230219 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" event={"ID":"4689ea28-dac4-434f-af87-18d6fc903330","Type":"ContainerStarted","Data":"aeecd2b89a671dca1be3ef9e35a978d5b8bb96c2f8a21345f57c0954a3cd475b"} Feb 18 14:19:40 crc kubenswrapper[4739]: I0218 14:19:40.253469 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-zz64p-config-zqwr9" podStartSLOduration=3.253416808 podStartE2EDuration="3.253416808s" podCreationTimestamp="2026-02-18 14:19:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:40.248979825 +0000 UTC m=+1212.744700757" watchObservedRunningTime="2026-02-18 14:19:40.253416808 +0000 UTC m=+1212.749137730" Feb 18 14:19:40 crc kubenswrapper[4739]: W0218 14:19:40.267647 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0b9a6cb_633e_4390_b1f9_048bc4a7a6ff.slice/crio-33d26cb22868168d1870877e87114a811101b427ae3af5dd6ee1c17ae4c65bb9 WatchSource:0}: Error finding container 33d26cb22868168d1870877e87114a811101b427ae3af5dd6ee1c17ae4c65bb9: Status 404 returned error can't find the container with id 33d26cb22868168d1870877e87114a811101b427ae3af5dd6ee1c17ae4c65bb9 Feb 18 14:19:40 crc kubenswrapper[4739]: I0218 14:19:40.274819 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-d06e-account-create-update-nwqxj"] Feb 18 14:19:40 crc kubenswrapper[4739]: I0218 14:19:40.429984 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="009b4d4e-6b53-4e8d-a03e-79c96c50425b" path="/var/lib/kubelet/pods/009b4d4e-6b53-4e8d-a03e-79c96c50425b/volumes" Feb 18 14:19:40 crc kubenswrapper[4739]: I0218 14:19:40.641722 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.139:9090/-/ready\": dial tcp 10.217.0.139:9090: connect: connection refused" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.252917 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" event={"ID":"4689ea28-dac4-434f-af87-18d6fc903330","Type":"ContainerStarted","Data":"03bcbac09256150553750b2ceb7fcb6d133193457a99a73d75f4293c1b1edcb5"} Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.255618 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" event={"ID":"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff","Type":"ContainerStarted","Data":"040eeb174e895a0add4ac74007d11ab4b4e0bb01f7764fd5d6eff38c7db3910b"} Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.255660 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" event={"ID":"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff","Type":"ContainerStarted","Data":"33d26cb22868168d1870877e87114a811101b427ae3af5dd6ee1c17ae4c65bb9"} Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.257516 4739 generic.go:334] "Generic (PLEG): container finished" podID="c9b1f63c-45e3-41c6-b25a-7136017ef699" containerID="7c4bb8b1c5394b1feff00226f10597657ca326d8c75003b9dcfbb17edea1d2b3" exitCode=0 Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.257542 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zz64p-config-zqwr9" event={"ID":"c9b1f63c-45e3-41c6-b25a-7136017ef699","Type":"ContainerDied","Data":"7c4bb8b1c5394b1feff00226f10597657ca326d8c75003b9dcfbb17edea1d2b3"} Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.280568 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" podStartSLOduration=2.280549609 podStartE2EDuration="2.280549609s" podCreationTimestamp="2026-02-18 14:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:41.277271745 +0000 UTC m=+1213.772992667" watchObservedRunningTime="2026-02-18 14:19:41.280549609 +0000 UTC m=+1213.776270531" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.331922 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" podStartSLOduration=2.331901707 podStartE2EDuration="2.331901707s" podCreationTimestamp="2026-02-18 14:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:41.328275005 +0000 UTC m=+1213.823995927" watchObservedRunningTime="2026-02-18 14:19:41.331901707 +0000 UTC m=+1213.827622629" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.520813 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.596639 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-thanos-prometheus-http-client-file\") pod \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.596690 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fdf07d43-6839-4ae1-9efd-bd21557e31f0-config-out\") pod \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.596738 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-2\") pod \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.596788 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-web-config\") pod \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.596832 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fdf07d43-6839-4ae1-9efd-bd21557e31f0-tls-assets\") pod \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.596851 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnhmt\" (UniqueName: \"kubernetes.io/projected/fdf07d43-6839-4ae1-9efd-bd21557e31f0-kube-api-access-vnhmt\") pod \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.596880 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-1\") pod \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.596898 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-0\") pod \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.597134 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869\") pod \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.597200 4739 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-config\") pod \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\" (UID: \"fdf07d43-6839-4ae1-9efd-bd21557e31f0\") " Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.599948 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "fdf07d43-6839-4ae1-9efd-bd21557e31f0" (UID: "fdf07d43-6839-4ae1-9efd-bd21557e31f0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.600299 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "fdf07d43-6839-4ae1-9efd-bd21557e31f0" (UID: "fdf07d43-6839-4ae1-9efd-bd21557e31f0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.603883 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "fdf07d43-6839-4ae1-9efd-bd21557e31f0" (UID: "fdf07d43-6839-4ae1-9efd-bd21557e31f0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.604023 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdf07d43-6839-4ae1-9efd-bd21557e31f0-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "fdf07d43-6839-4ae1-9efd-bd21557e31f0" (UID: "fdf07d43-6839-4ae1-9efd-bd21557e31f0"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.605574 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-config" (OuterVolumeSpecName: "config") pod "fdf07d43-6839-4ae1-9efd-bd21557e31f0" (UID: "fdf07d43-6839-4ae1-9efd-bd21557e31f0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.608007 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdf07d43-6839-4ae1-9efd-bd21557e31f0-kube-api-access-vnhmt" (OuterVolumeSpecName: "kube-api-access-vnhmt") pod "fdf07d43-6839-4ae1-9efd-bd21557e31f0" (UID: "fdf07d43-6839-4ae1-9efd-bd21557e31f0"). InnerVolumeSpecName "kube-api-access-vnhmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.610786 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "fdf07d43-6839-4ae1-9efd-bd21557e31f0" (UID: "fdf07d43-6839-4ae1-9efd-bd21557e31f0"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.612574 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdf07d43-6839-4ae1-9efd-bd21557e31f0-config-out" (OuterVolumeSpecName: "config-out") pod "fdf07d43-6839-4ae1-9efd-bd21557e31f0" (UID: "fdf07d43-6839-4ae1-9efd-bd21557e31f0"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.638687 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-web-config" (OuterVolumeSpecName: "web-config") pod "fdf07d43-6839-4ae1-9efd-bd21557e31f0" (UID: "fdf07d43-6839-4ae1-9efd-bd21557e31f0"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.671455 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "fdf07d43-6839-4ae1-9efd-bd21557e31f0" (UID: "fdf07d43-6839-4ae1-9efd-bd21557e31f0"). InnerVolumeSpecName "pvc-065eb27a-babd-4c1e-9733-7075a750b869". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.699699 4739 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.699729 4739 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fdf07d43-6839-4ae1-9efd-bd21557e31f0-config-out\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.699739 4739 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.699748 4739 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-web-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.699758 4739 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fdf07d43-6839-4ae1-9efd-bd21557e31f0-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.699766 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnhmt\" (UniqueName: \"kubernetes.io/projected/fdf07d43-6839-4ae1-9efd-bd21557e31f0-kube-api-access-vnhmt\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.699775 4739 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.699784 4739 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" 
(UniqueName: \"kubernetes.io/configmap/fdf07d43-6839-4ae1-9efd-bd21557e31f0-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.699809 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-065eb27a-babd-4c1e-9733-7075a750b869\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869\") on node \"crc\" " Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.699820 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fdf07d43-6839-4ae1-9efd-bd21557e31f0-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.744209 4739 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.744381 4739 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-065eb27a-babd-4c1e-9733-7075a750b869" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869") on node "crc" Feb 18 14:19:41 crc kubenswrapper[4739]: I0218 14:19:41.801756 4739 reconciler_common.go:293] "Volume detached for volume \"pvc-065eb27a-babd-4c1e-9733-7075a750b869\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.107493 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.118093 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4da69d20-d4af-4d8d-b1e1-5026676d2078-etc-swift\") pod \"swift-storage-0\" (UID: \"4da69d20-d4af-4d8d-b1e1-5026676d2078\") " pod="openstack/swift-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.222741 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.299783 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.302386 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"fdf07d43-6839-4ae1-9efd-bd21557e31f0","Type":"ContainerDied","Data":"f97314f9f73b65ab6d585d1190d55be82b1924ce7010a229a6c53d15da07f316"} Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.302473 4739 scope.go:117] "RemoveContainer" containerID="420239777de013111b55f9705b339d83a1c93dfa9079f1331da42bfce805ea29" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.401918 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.407056 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.439407 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" path="/var/lib/kubelet/pods/fdf07d43-6839-4ae1-9efd-bd21557e31f0/volumes" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.449594 4739 scope.go:117] "RemoveContainer" containerID="33e26c074fe392c233d18320191c667cb0f7939b2787e917560ff0fa66b0f407" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.456736 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 14:19:42 crc kubenswrapper[4739]: E0218 14:19:42.457882 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="init-config-reloader" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.457902 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="init-config-reloader" Feb 18 14:19:42 crc kubenswrapper[4739]: E0218 14:19:42.457937 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="thanos-sidecar" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.457945 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="thanos-sidecar" Feb 18 14:19:42 crc kubenswrapper[4739]: E0218 14:19:42.457956 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="config-reloader" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.457965 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="config-reloader" Feb 18 14:19:42 crc kubenswrapper[4739]: E0218 14:19:42.457988 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="prometheus" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.457996 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="prometheus" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.458249 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="thanos-sidecar" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.458266 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="config-reloader" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.458281 4739 
memory_manager.go:354] "RemoveStaleState removing state" podUID="fdf07d43-6839-4ae1-9efd-bd21557e31f0" containerName="prometheus" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.461276 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.466006 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.466312 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.466501 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.466662 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.466829 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.467137 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-nz745" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.467178 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.476476 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.481656 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.487721 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.529887 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.529935 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/06c16940-f153-4d15-891d-b0b91e9bce5a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.529986 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-config\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.530012 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" 
(UniqueName: \"kubernetes.io/empty-dir/06c16940-f153-4d15-891d-b0b91e9bce5a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.530053 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.530116 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.530182 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhb24\" (UniqueName: \"kubernetes.io/projected/06c16940-f153-4d15-891d-b0b91e9bce5a-kube-api-access-qhb24\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.530215 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.530230 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/06c16940-f153-4d15-891d-b0b91e9bce5a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.530256 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/06c16940-f153-4d15-891d-b0b91e9bce5a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.530340 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/06c16940-f153-4d15-891d-b0b91e9bce5a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.530369 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.530409 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-065eb27a-babd-4c1e-9733-7075a750b869\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.562149 4739 scope.go:117] "RemoveContainer" containerID="20e4696ddb81097644db58c7ff47cdd8db35bca8af8eb47dfd10333be0e9ab30" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.588035 4739 scope.go:117] "RemoveContainer" containerID="d130ba5106c46e0eaf379f38920ded0167eab599120dd5d9ffdf9b8b0e9aa0ac" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.636512 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/06c16940-f153-4d15-891d-b0b91e9bce5a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.636755 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.637426 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/06c16940-f153-4d15-891d-b0b91e9bce5a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.640061 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-065eb27a-babd-4c1e-9733-7075a750b869\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.640218 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.640253 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/06c16940-f153-4d15-891d-b0b91e9bce5a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: 
\"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.640334 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-config\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.642080 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/06c16940-f153-4d15-891d-b0b91e9bce5a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.642655 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/06c16940-f153-4d15-891d-b0b91e9bce5a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.642746 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.642849 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.642935 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhb24\" (UniqueName: \"kubernetes.io/projected/06c16940-f153-4d15-891d-b0b91e9bce5a-kube-api-access-qhb24\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.642981 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.643012 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/06c16940-f153-4d15-891d-b0b91e9bce5a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.643045 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/06c16940-f153-4d15-891d-b0b91e9bce5a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.644364 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/06c16940-f153-4d15-891d-b0b91e9bce5a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.646990 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.647034 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.649309 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/06c16940-f153-4d15-891d-b0b91e9bce5a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.649866 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.652898 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.654020 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.654104 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-065eb27a-babd-4c1e-9733-7075a750b869\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/01cfb519e92c9e23501f00a5b6c703ca97cb1b944d5fe5c6aa349ce505ad2fe2/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.663951 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/06c16940-f153-4d15-891d-b0b91e9bce5a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.676073 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-config\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.677693 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhb24\" (UniqueName: \"kubernetes.io/projected/06c16940-f153-4d15-891d-b0b91e9bce5a-kube-api-access-qhb24\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.679115 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/06c16940-f153-4d15-891d-b0b91e9bce5a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.724623 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-065eb27a-babd-4c1e-9733-7075a750b869\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-065eb27a-babd-4c1e-9733-7075a750b869\") pod \"prometheus-metric-storage-0\" (UID: \"06c16940-f153-4d15-891d-b0b91e9bce5a\") " pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.809802 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.938118 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:42 crc kubenswrapper[4739]: I0218 14:19:42.965292 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 18 14:19:42 crc kubenswrapper[4739]: W0218 14:19:42.991185 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4da69d20_d4af_4d8d_b1e1_5026676d2078.slice/crio-351e3fc279650f48ff5eac5dd9d1fabb9e666894ad4ff17a14301184bfcb26e4 WatchSource:0}: Error finding container 351e3fc279650f48ff5eac5dd9d1fabb9e666894ad4ff17a14301184bfcb26e4: Status 404 returned error can't find the container with id 351e3fc279650f48ff5eac5dd9d1fabb9e666894ad4ff17a14301184bfcb26e4 Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.049550 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-run-ovn\") pod \"c9b1f63c-45e3-41c6-b25a-7136017ef699\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.049649 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c9b1f63c-45e3-41c6-b25a-7136017ef699-additional-scripts\") pod \"c9b1f63c-45e3-41c6-b25a-7136017ef699\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.049697 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-log-ovn\") pod \"c9b1f63c-45e3-41c6-b25a-7136017ef699\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.049819 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-run\") pod \"c9b1f63c-45e3-41c6-b25a-7136017ef699\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.049890 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtd87\" (UniqueName: \"kubernetes.io/projected/c9b1f63c-45e3-41c6-b25a-7136017ef699-kube-api-access-mtd87\") pod \"c9b1f63c-45e3-41c6-b25a-7136017ef699\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.050004 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9b1f63c-45e3-41c6-b25a-7136017ef699-scripts\") pod \"c9b1f63c-45e3-41c6-b25a-7136017ef699\" (UID: \"c9b1f63c-45e3-41c6-b25a-7136017ef699\") " Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.050035 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c9b1f63c-45e3-41c6-b25a-7136017ef699" (UID: "c9b1f63c-45e3-41c6-b25a-7136017ef699"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.050103 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-run" (OuterVolumeSpecName: "var-run") pod "c9b1f63c-45e3-41c6-b25a-7136017ef699" (UID: "c9b1f63c-45e3-41c6-b25a-7136017ef699"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.050466 4739 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.050482 4739 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-run\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.050788 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9b1f63c-45e3-41c6-b25a-7136017ef699-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c9b1f63c-45e3-41c6-b25a-7136017ef699" (UID: "c9b1f63c-45e3-41c6-b25a-7136017ef699"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.051109 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9b1f63c-45e3-41c6-b25a-7136017ef699-scripts" (OuterVolumeSpecName: "scripts") pod "c9b1f63c-45e3-41c6-b25a-7136017ef699" (UID: "c9b1f63c-45e3-41c6-b25a-7136017ef699"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.051690 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c9b1f63c-45e3-41c6-b25a-7136017ef699" (UID: "c9b1f63c-45e3-41c6-b25a-7136017ef699"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.056993 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b1f63c-45e3-41c6-b25a-7136017ef699-kube-api-access-mtd87" (OuterVolumeSpecName: "kube-api-access-mtd87") pod "c9b1f63c-45e3-41c6-b25a-7136017ef699" (UID: "c9b1f63c-45e3-41c6-b25a-7136017ef699"). InnerVolumeSpecName "kube-api-access-mtd87". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.097164 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="70500a97-2717-4761-884a-25cf8ab89380" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.111082 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-x4jss"] Feb 18 14:19:43 crc kubenswrapper[4739]: E0218 14:19:43.111582 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b1f63c-45e3-41c6-b25a-7136017ef699" containerName="ovn-config" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.111604 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b1f63c-45e3-41c6-b25a-7136017ef699" containerName="ovn-config" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.111821 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9b1f63c-45e3-41c6-b25a-7136017ef699" containerName="ovn-config" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.112542 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-x4jss" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.116001 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.116363 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.123437 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-x4jss"] Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.152317 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be735ec5-4c83-4f86-bffd-b42877b96df2-operator-scripts\") pod \"root-account-create-update-x4jss\" (UID: \"be735ec5-4c83-4f86-bffd-b42877b96df2\") " pod="openstack/root-account-create-update-x4jss" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.152587 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbhvx\" (UniqueName: \"kubernetes.io/projected/be735ec5-4c83-4f86-bffd-b42877b96df2-kube-api-access-tbhvx\") pod \"root-account-create-update-x4jss\" (UID: \"be735ec5-4c83-4f86-bffd-b42877b96df2\") " pod="openstack/root-account-create-update-x4jss" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.153044 4739 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9b1f63c-45e3-41c6-b25a-7136017ef699-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.153074 4739 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c9b1f63c-45e3-41c6-b25a-7136017ef699-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.153086 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtd87\" (UniqueName: 
\"kubernetes.io/projected/c9b1f63c-45e3-41c6-b25a-7136017ef699-kube-api-access-mtd87\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.153099 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9b1f63c-45e3-41c6-b25a-7136017ef699-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.217829 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.255189 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be735ec5-4c83-4f86-bffd-b42877b96df2-operator-scripts\") pod \"root-account-create-update-x4jss\" (UID: \"be735ec5-4c83-4f86-bffd-b42877b96df2\") " pod="openstack/root-account-create-update-x4jss" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.255324 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbhvx\" (UniqueName: \"kubernetes.io/projected/be735ec5-4c83-4f86-bffd-b42877b96df2-kube-api-access-tbhvx\") pod \"root-account-create-update-x4jss\" (UID: \"be735ec5-4c83-4f86-bffd-b42877b96df2\") " pod="openstack/root-account-create-update-x4jss" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.256054 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be735ec5-4c83-4f86-bffd-b42877b96df2-operator-scripts\") pod \"root-account-create-update-x4jss\" (UID: \"be735ec5-4c83-4f86-bffd-b42877b96df2\") " pod="openstack/root-account-create-update-x4jss" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.275132 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbhvx\" (UniqueName: \"kubernetes.io/projected/be735ec5-4c83-4f86-bffd-b42877b96df2-kube-api-access-tbhvx\") pod \"root-account-create-update-x4jss\" (UID: \"be735ec5-4c83-4f86-bffd-b42877b96df2\") " pod="openstack/root-account-create-update-x4jss" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.287782 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.323681 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zz64p-config-zqwr9" event={"ID":"c9b1f63c-45e3-41c6-b25a-7136017ef699","Type":"ContainerDied","Data":"8a12b70fe38cbeeedf9e3138a2a60817e675e6940e1ede9c344cc11b3e9be763"} Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.323707 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zz64p-config-zqwr9" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.323725 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a12b70fe38cbeeedf9e3138a2a60817e675e6940e1ede9c344cc11b3e9be763" Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.326008 4739 generic.go:334] "Generic (PLEG): container finished" podID="4689ea28-dac4-434f-af87-18d6fc903330" containerID="03bcbac09256150553750b2ceb7fcb6d133193457a99a73d75f4293c1b1edcb5" exitCode=0 Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.326054 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" event={"ID":"4689ea28-dac4-434f-af87-18d6fc903330","Type":"ContainerDied","Data":"03bcbac09256150553750b2ceb7fcb6d133193457a99a73d75f4293c1b1edcb5"} Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.334227 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"351e3fc279650f48ff5eac5dd9d1fabb9e666894ad4ff17a14301184bfcb26e4"} Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.369651 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 14:19:43 crc kubenswrapper[4739]: W0218 14:19:43.370263 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06c16940_f153_4d15_891d_b0b91e9bce5a.slice/crio-21c32c2e9ede10b812a0ce894aef365f6cb819d7e6b19dec2850320bd8ff1ab4 WatchSource:0}: Error finding container 21c32c2e9ede10b812a0ce894aef365f6cb819d7e6b19dec2850320bd8ff1ab4: Status 404 returned error can't find the container with id 21c32c2e9ede10b812a0ce894aef365f6cb819d7e6b19dec2850320bd8ff1ab4 Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.380768 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zz64p-config-zqwr9"] Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.392258 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zz64p-config-zqwr9"] Feb 18 14:19:43 crc kubenswrapper[4739]: I0218 14:19:43.436925 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-x4jss" Feb 18 14:19:44 crc kubenswrapper[4739]: I0218 14:19:44.012653 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-x4jss"] Feb 18 14:19:44 crc kubenswrapper[4739]: I0218 14:19:44.345682 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-x4jss" event={"ID":"be735ec5-4c83-4f86-bffd-b42877b96df2","Type":"ContainerStarted","Data":"17b7a228a9fbcf851aed446c2de3568b52fb77affe9764c39277650c860631aa"} Feb 18 14:19:44 crc kubenswrapper[4739]: I0218 14:19:44.346038 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-x4jss" event={"ID":"be735ec5-4c83-4f86-bffd-b42877b96df2","Type":"ContainerStarted","Data":"a10a503ee50917cfadfe83e9c1c13a6e8fa809f2ae7aa15a510e503bdb352de9"} Feb 18 14:19:44 crc kubenswrapper[4739]: I0218 14:19:44.347407 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"06c16940-f153-4d15-891d-b0b91e9bce5a","Type":"ContainerStarted","Data":"21c32c2e9ede10b812a0ce894aef365f6cb819d7e6b19dec2850320bd8ff1ab4"} Feb 18 14:19:44 crc kubenswrapper[4739]: I0218 14:19:44.349095 4739 generic.go:334] "Generic (PLEG): container finished" podID="b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff" containerID="040eeb174e895a0add4ac74007d11ab4b4e0bb01f7764fd5d6eff38c7db3910b" exitCode=0 Feb 18 14:19:44 crc kubenswrapper[4739]: I0218 14:19:44.349180 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" event={"ID":"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff","Type":"ContainerDied","Data":"040eeb174e895a0add4ac74007d11ab4b4e0bb01f7764fd5d6eff38c7db3910b"} Feb 18 14:19:44 crc kubenswrapper[4739]: I0218 14:19:44.426667 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9b1f63c-45e3-41c6-b25a-7136017ef699" path="/var/lib/kubelet/pods/c9b1f63c-45e3-41c6-b25a-7136017ef699/volumes" Feb 18 14:19:45 crc kubenswrapper[4739]: I0218 14:19:45.385764 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" event={"ID":"4689ea28-dac4-434f-af87-18d6fc903330","Type":"ContainerDied","Data":"aeecd2b89a671dca1be3ef9e35a978d5b8bb96c2f8a21345f57c0954a3cd475b"} Feb 18 14:19:45 crc kubenswrapper[4739]: I0218 14:19:45.386293 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aeecd2b89a671dca1be3ef9e35a978d5b8bb96c2f8a21345f57c0954a3cd475b" Feb 18 14:19:45 crc kubenswrapper[4739]: I0218 14:19:45.391188 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" Feb 18 14:19:45 crc kubenswrapper[4739]: I0218 14:19:45.445203 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-x4jss" podStartSLOduration=2.445177465 podStartE2EDuration="2.445177465s" podCreationTimestamp="2026-02-18 14:19:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:19:45.411977019 +0000 UTC m=+1217.907697951" watchObservedRunningTime="2026-02-18 14:19:45.445177465 +0000 UTC m=+1217.940898397" Feb 18 14:19:45 crc kubenswrapper[4739]: I0218 14:19:45.507157 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4689ea28-dac4-434f-af87-18d6fc903330-operator-scripts\") pod \"4689ea28-dac4-434f-af87-18d6fc903330\" (UID: \"4689ea28-dac4-434f-af87-18d6fc903330\") " Feb 18 14:19:45 crc kubenswrapper[4739]: I0218 14:19:45.507346 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xwmf\" (UniqueName: \"kubernetes.io/projected/4689ea28-dac4-434f-af87-18d6fc903330-kube-api-access-9xwmf\") pod \"4689ea28-dac4-434f-af87-18d6fc903330\" (UID: \"4689ea28-dac4-434f-af87-18d6fc903330\") " Feb 18 14:19:45 crc kubenswrapper[4739]: I0218 14:19:45.508343 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4689ea28-dac4-434f-af87-18d6fc903330-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4689ea28-dac4-434f-af87-18d6fc903330" (UID: "4689ea28-dac4-434f-af87-18d6fc903330"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:45 crc kubenswrapper[4739]: I0218 14:19:45.606615 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4689ea28-dac4-434f-af87-18d6fc903330-kube-api-access-9xwmf" (OuterVolumeSpecName: "kube-api-access-9xwmf") pod "4689ea28-dac4-434f-af87-18d6fc903330" (UID: "4689ea28-dac4-434f-af87-18d6fc903330"). InnerVolumeSpecName "kube-api-access-9xwmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:45 crc kubenswrapper[4739]: I0218 14:19:45.610079 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4689ea28-dac4-434f-af87-18d6fc903330-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:45 crc kubenswrapper[4739]: I0218 14:19:45.610112 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xwmf\" (UniqueName: \"kubernetes.io/projected/4689ea28-dac4-434f-af87-18d6fc903330-kube-api-access-9xwmf\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:45 crc kubenswrapper[4739]: I0218 14:19:45.945671 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" Feb 18 14:19:46 crc kubenswrapper[4739]: I0218 14:19:46.017422 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69l57\" (UniqueName: \"kubernetes.io/projected/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff-kube-api-access-69l57\") pod \"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff\" (UID: \"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff\") " Feb 18 14:19:46 crc kubenswrapper[4739]: I0218 14:19:46.017556 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff-operator-scripts\") pod \"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff\" (UID: \"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff\") " Feb 18 14:19:46 crc kubenswrapper[4739]: I0218 14:19:46.018660 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff" (UID: "b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:46 crc kubenswrapper[4739]: I0218 14:19:46.025797 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff-kube-api-access-69l57" (OuterVolumeSpecName: "kube-api-access-69l57") pod "b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff" (UID: "b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff"). InnerVolumeSpecName "kube-api-access-69l57". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:46 crc kubenswrapper[4739]: I0218 14:19:46.120541 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:46 crc kubenswrapper[4739]: I0218 14:19:46.120574 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69l57\" (UniqueName: \"kubernetes.io/projected/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff-kube-api-access-69l57\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:46 crc kubenswrapper[4739]: I0218 14:19:46.412656 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm" Feb 18 14:19:46 crc kubenswrapper[4739]: I0218 14:19:46.412804 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" Feb 18 14:19:46 crc kubenswrapper[4739]: I0218 14:19:46.435926 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-d06e-account-create-update-nwqxj" event={"ID":"b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff","Type":"ContainerDied","Data":"33d26cb22868168d1870877e87114a811101b427ae3af5dd6ee1c17ae4c65bb9"} Feb 18 14:19:46 crc kubenswrapper[4739]: I0218 14:19:46.436048 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33d26cb22868168d1870877e87114a811101b427ae3af5dd6ee1c17ae4c65bb9" Feb 18 14:19:47 crc kubenswrapper[4739]: I0218 14:19:47.421996 4739 generic.go:334] "Generic (PLEG): container finished" podID="be735ec5-4c83-4f86-bffd-b42877b96df2" containerID="17b7a228a9fbcf851aed446c2de3568b52fb77affe9764c39277650c860631aa" exitCode=0 Feb 18 14:19:47 crc kubenswrapper[4739]: I0218 14:19:47.422098 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-x4jss" event={"ID":"be735ec5-4c83-4f86-bffd-b42877b96df2","Type":"ContainerDied","Data":"17b7a228a9fbcf851aed446c2de3568b52fb77affe9764c39277650c860631aa"} Feb 18 14:19:47 crc kubenswrapper[4739]: I0218 14:19:47.424935 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"06c16940-f153-4d15-891d-b0b91e9bce5a","Type":"ContainerStarted","Data":"06f51eb38cceffa70932bdbeed465002f935500ebf3691d8f4a712f1d3ef416b"} Feb 18 14:19:49 crc kubenswrapper[4739]: I0218 14:19:49.818437 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 14:19:49 crc kubenswrapper[4739]: E0218 14:19:49.819273 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff" containerName="mariadb-account-create-update" Feb 18 14:19:49 crc kubenswrapper[4739]: I0218 14:19:49.819292 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff" containerName="mariadb-account-create-update" Feb 18 14:19:49 crc kubenswrapper[4739]: E0218 14:19:49.819315 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4689ea28-dac4-434f-af87-18d6fc903330" containerName="mariadb-database-create" Feb 18 14:19:49 crc kubenswrapper[4739]: I0218 14:19:49.819323 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4689ea28-dac4-434f-af87-18d6fc903330" containerName="mariadb-database-create" Feb 18 14:19:49 crc kubenswrapper[4739]: I0218 14:19:49.819581 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff" containerName="mariadb-account-create-update" Feb 18 14:19:49 crc kubenswrapper[4739]: I0218 14:19:49.819608 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4689ea28-dac4-434f-af87-18d6fc903330" containerName="mariadb-database-create" Feb 18 14:19:49 crc kubenswrapper[4739]: I0218 14:19:49.820364 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 18 14:19:49 crc kubenswrapper[4739]: I0218 14:19:49.822700 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 18 14:19:49 crc kubenswrapper[4739]: I0218 14:19:49.834438 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 14:19:50 crc kubenswrapper[4739]: I0218 14:19:50.008942 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xn7l\" (UniqueName: \"kubernetes.io/projected/4786d26d-b01e-4e3a-9407-81307b5a1433-kube-api-access-2xn7l\") pod \"mysqld-exporter-0\" (UID: \"4786d26d-b01e-4e3a-9407-81307b5a1433\") " pod="openstack/mysqld-exporter-0" Feb 18 14:19:50 crc kubenswrapper[4739]: I0218 14:19:50.009005 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4786d26d-b01e-4e3a-9407-81307b5a1433-config-data\") pod \"mysqld-exporter-0\" (UID: \"4786d26d-b01e-4e3a-9407-81307b5a1433\") " pod="openstack/mysqld-exporter-0" Feb 18 14:19:50 crc kubenswrapper[4739]: I0218 14:19:50.009343 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4786d26d-b01e-4e3a-9407-81307b5a1433-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"4786d26d-b01e-4e3a-9407-81307b5a1433\") " pod="openstack/mysqld-exporter-0" Feb 18 14:19:50 crc kubenswrapper[4739]: I0218 14:19:50.111157 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xn7l\" (UniqueName: \"kubernetes.io/projected/4786d26d-b01e-4e3a-9407-81307b5a1433-kube-api-access-2xn7l\") pod \"mysqld-exporter-0\" (UID: \"4786d26d-b01e-4e3a-9407-81307b5a1433\") " pod="openstack/mysqld-exporter-0" Feb 18 14:19:50 crc kubenswrapper[4739]: I0218 14:19:50.111210 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4786d26d-b01e-4e3a-9407-81307b5a1433-config-data\") pod \"mysqld-exporter-0\" (UID: \"4786d26d-b01e-4e3a-9407-81307b5a1433\") " pod="openstack/mysqld-exporter-0" Feb 18 14:19:50 crc kubenswrapper[4739]: I0218 14:19:50.111312 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4786d26d-b01e-4e3a-9407-81307b5a1433-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"4786d26d-b01e-4e3a-9407-81307b5a1433\") " pod="openstack/mysqld-exporter-0" Feb 18 14:19:50 crc kubenswrapper[4739]: I0218 14:19:50.119323 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4786d26d-b01e-4e3a-9407-81307b5a1433-config-data\") pod \"mysqld-exporter-0\" (UID: \"4786d26d-b01e-4e3a-9407-81307b5a1433\") " pod="openstack/mysqld-exporter-0" Feb 18 14:19:50 crc kubenswrapper[4739]: I0218 14:19:50.122102 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4786d26d-b01e-4e3a-9407-81307b5a1433-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"4786d26d-b01e-4e3a-9407-81307b5a1433\") " pod="openstack/mysqld-exporter-0" Feb 18 14:19:50 crc kubenswrapper[4739]: I0218 14:19:50.144473 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xn7l\" (UniqueName: 
\"kubernetes.io/projected/4786d26d-b01e-4e3a-9407-81307b5a1433-kube-api-access-2xn7l\") pod \"mysqld-exporter-0\" (UID: \"4786d26d-b01e-4e3a-9407-81307b5a1433\") " pod="openstack/mysqld-exporter-0" Feb 18 14:19:50 crc kubenswrapper[4739]: I0218 14:19:50.442216 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 18 14:19:53 crc kubenswrapper[4739]: I0218 14:19:53.090910 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="70500a97-2717-4761-884a-25cf8ab89380" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Feb 18 14:19:53 crc kubenswrapper[4739]: I0218 14:19:53.114853 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Feb 18 14:19:53 crc kubenswrapper[4739]: I0218 14:19:53.214139 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Feb 18 14:19:53 crc kubenswrapper[4739]: I0218 14:19:53.286295 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused" Feb 18 14:19:53 crc kubenswrapper[4739]: I0218 14:19:53.628049 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 18 14:19:55 crc kubenswrapper[4739]: I0218 14:19:55.715384 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-x4jss" Feb 18 14:19:55 crc kubenswrapper[4739]: I0218 14:19:55.750742 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbhvx\" (UniqueName: \"kubernetes.io/projected/be735ec5-4c83-4f86-bffd-b42877b96df2-kube-api-access-tbhvx\") pod \"be735ec5-4c83-4f86-bffd-b42877b96df2\" (UID: \"be735ec5-4c83-4f86-bffd-b42877b96df2\") " Feb 18 14:19:55 crc kubenswrapper[4739]: I0218 14:19:55.750799 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be735ec5-4c83-4f86-bffd-b42877b96df2-operator-scripts\") pod \"be735ec5-4c83-4f86-bffd-b42877b96df2\" (UID: \"be735ec5-4c83-4f86-bffd-b42877b96df2\") " Feb 18 14:19:55 crc kubenswrapper[4739]: I0218 14:19:55.753315 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be735ec5-4c83-4f86-bffd-b42877b96df2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be735ec5-4c83-4f86-bffd-b42877b96df2" (UID: "be735ec5-4c83-4f86-bffd-b42877b96df2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:19:55 crc kubenswrapper[4739]: I0218 14:19:55.759514 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be735ec5-4c83-4f86-bffd-b42877b96df2-kube-api-access-tbhvx" (OuterVolumeSpecName: "kube-api-access-tbhvx") pod "be735ec5-4c83-4f86-bffd-b42877b96df2" (UID: "be735ec5-4c83-4f86-bffd-b42877b96df2"). 
InnerVolumeSpecName "kube-api-access-tbhvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:19:55 crc kubenswrapper[4739]: I0218 14:19:55.853242 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbhvx\" (UniqueName: \"kubernetes.io/projected/be735ec5-4c83-4f86-bffd-b42877b96df2-kube-api-access-tbhvx\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:55 crc kubenswrapper[4739]: I0218 14:19:55.853278 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be735ec5-4c83-4f86-bffd-b42877b96df2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:19:56 crc kubenswrapper[4739]: I0218 14:19:56.084702 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 14:19:56 crc kubenswrapper[4739]: I0218 14:19:56.516908 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gnm8m" event={"ID":"edf3454e-4ac2-42a7-98b1-0f43065764c2","Type":"ContainerStarted","Data":"2f8b36ebc50069dffafc10ad5580f0650c3a5e44aee32de71fb90f645671e661"} Feb 18 14:19:56 crc kubenswrapper[4739]: I0218 14:19:56.518588 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"4786d26d-b01e-4e3a-9407-81307b5a1433","Type":"ContainerStarted","Data":"7802eb786f9fd65a5a871491a73453af4c3e9308ab2608296cd37aed4159f91a"} Feb 18 14:19:56 crc kubenswrapper[4739]: I0218 14:19:56.525972 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-x4jss" Feb 18 14:19:56 crc kubenswrapper[4739]: I0218 14:19:56.526874 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-x4jss" event={"ID":"be735ec5-4c83-4f86-bffd-b42877b96df2","Type":"ContainerDied","Data":"a10a503ee50917cfadfe83e9c1c13a6e8fa809f2ae7aa15a510e503bdb352de9"} Feb 18 14:19:56 crc kubenswrapper[4739]: I0218 14:19:56.526911 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a10a503ee50917cfadfe83e9c1c13a6e8fa809f2ae7aa15a510e503bdb352de9" Feb 18 14:19:56 crc kubenswrapper[4739]: I0218 14:19:56.541998 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"6364d38b568225606ea29cbbac819b4d068a82e4af7e2fe3065262d324d7595b"} Feb 18 14:19:56 crc kubenswrapper[4739]: I0218 14:19:56.542058 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"98c9ca66c056cd72cee2dbfab7c52802a7407dbb78e0422f911a6292a8ab063e"} Feb 18 14:19:56 crc kubenswrapper[4739]: I0218 14:19:56.542071 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"a4e66e06ee6342e149e83bd665b7b281fb957c43aed72691bcc66fc29591f16e"} Feb 18 14:19:56 crc kubenswrapper[4739]: I0218 14:19:56.555571 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-gnm8m" podStartSLOduration=2.052010297 podStartE2EDuration="19.555547324s" podCreationTimestamp="2026-02-18 14:19:37 +0000 UTC" firstStartedPulling="2026-02-18 14:19:38.359391525 +0000 UTC m=+1210.855112447" lastFinishedPulling="2026-02-18 14:19:55.862928552 +0000 UTC m=+1228.358649474" observedRunningTime="2026-02-18 
14:19:56.542568363 +0000 UTC m=+1229.038289295" watchObservedRunningTime="2026-02-18 14:19:56.555547324 +0000 UTC m=+1229.051268256" Feb 18 14:19:57 crc kubenswrapper[4739]: I0218 14:19:57.571177 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"022adf5a640f63c67ffd622879ebd72c6fce8adb1af8152426ca84e9ab05b2b1"} Feb 18 14:19:59 crc kubenswrapper[4739]: I0218 14:19:59.372740 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:19:59 crc kubenswrapper[4739]: I0218 14:19:59.373093 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:19:59 crc kubenswrapper[4739]: I0218 14:19:59.486019 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-x4jss"] Feb 18 14:19:59 crc kubenswrapper[4739]: I0218 14:19:59.494522 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-x4jss"] Feb 18 14:20:00 crc kubenswrapper[4739]: I0218 14:20:00.423277 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be735ec5-4c83-4f86-bffd-b42877b96df2" path="/var/lib/kubelet/pods/be735ec5-4c83-4f86-bffd-b42877b96df2/volumes" Feb 18 14:20:00 crc kubenswrapper[4739]: I0218 14:20:00.599548 4739 generic.go:334] "Generic (PLEG): container finished" podID="06c16940-f153-4d15-891d-b0b91e9bce5a" containerID="06f51eb38cceffa70932bdbeed465002f935500ebf3691d8f4a712f1d3ef416b" exitCode=0 Feb 18 14:20:00 crc kubenswrapper[4739]: I0218 14:20:00.599594 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"06c16940-f153-4d15-891d-b0b91e9bce5a","Type":"ContainerDied","Data":"06f51eb38cceffa70932bdbeed465002f935500ebf3691d8f4a712f1d3ef416b"} Feb 18 14:20:01 crc kubenswrapper[4739]: I0218 14:20:01.611715 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"4786d26d-b01e-4e3a-9407-81307b5a1433","Type":"ContainerStarted","Data":"9182016155c2cfd3865f3579fd6250303c57c41f06d79e483e00d365f229195e"} Feb 18 14:20:01 crc kubenswrapper[4739]: I0218 14:20:01.616772 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"06c16940-f153-4d15-891d-b0b91e9bce5a","Type":"ContainerStarted","Data":"86c19524753499efa01c12762f90aea45a5f08487a361af23f3b7422ebef8ddc"} Feb 18 14:20:01 crc kubenswrapper[4739]: I0218 14:20:01.637523 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=7.468935149 podStartE2EDuration="12.637496488s" podCreationTimestamp="2026-02-18 14:19:49 +0000 UTC" firstStartedPulling="2026-02-18 14:19:56.085697757 +0000 UTC m=+1228.581418669" lastFinishedPulling="2026-02-18 14:20:01.254259086 +0000 UTC m=+1233.749980008" observedRunningTime="2026-02-18 14:20:01.628882357 +0000 UTC m=+1234.124603289" watchObservedRunningTime="2026-02-18 14:20:01.637496488 +0000 UTC 
m=+1234.133217410" Feb 18 14:20:02 crc kubenswrapper[4739]: I0218 14:20:02.631355 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"47f5ed40bbfc91c681580085413de127aa97c26338b3d29f55f0c76bcae69a71"} Feb 18 14:20:02 crc kubenswrapper[4739]: I0218 14:20:02.631724 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"982ee4400eca1d47aba235dc30f45a5c6d75edfdc3b76ef08c8d0cae89424fc5"} Feb 18 14:20:03 crc kubenswrapper[4739]: I0218 14:20:03.093288 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 18 14:20:03 crc kubenswrapper[4739]: I0218 14:20:03.115197 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Feb 18 14:20:03 crc kubenswrapper[4739]: I0218 14:20:03.215250 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Feb 18 14:20:03 crc kubenswrapper[4739]: I0218 14:20:03.290083 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:20:03 crc kubenswrapper[4739]: I0218 14:20:03.645702 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"0d6224db6b1d6e414645a7cbe83e9521f38c527b68c3202589593299ce7f369c"} Feb 18 14:20:03 crc kubenswrapper[4739]: I0218 14:20:03.646830 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"c5405471c095e5ebc4f4f1e78f4e1b1a568f4fae2058b26294cc796be04ab829"} Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.523251 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-2t2n6"] Feb 18 14:20:04 crc kubenswrapper[4739]: E0218 14:20:04.524427 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be735ec5-4c83-4f86-bffd-b42877b96df2" containerName="mariadb-account-create-update" Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.524881 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="be735ec5-4c83-4f86-bffd-b42877b96df2" containerName="mariadb-account-create-update" Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.525540 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="be735ec5-4c83-4f86-bffd-b42877b96df2" containerName="mariadb-account-create-update" Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.526897 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2t2n6" Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.529910 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.571074 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2t2n6"] Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.651903 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1df0b15-6927-4300-b034-6b5c3308320d-operator-scripts\") pod \"root-account-create-update-2t2n6\" (UID: \"f1df0b15-6927-4300-b034-6b5c3308320d\") " pod="openstack/root-account-create-update-2t2n6" Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.652087 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnk46\" (UniqueName: \"kubernetes.io/projected/f1df0b15-6927-4300-b034-6b5c3308320d-kube-api-access-tnk46\") pod \"root-account-create-update-2t2n6\" (UID: \"f1df0b15-6927-4300-b034-6b5c3308320d\") " pod="openstack/root-account-create-update-2t2n6" Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.754544 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1df0b15-6927-4300-b034-6b5c3308320d-operator-scripts\") pod \"root-account-create-update-2t2n6\" (UID: \"f1df0b15-6927-4300-b034-6b5c3308320d\") " pod="openstack/root-account-create-update-2t2n6" Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.754685 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnk46\" (UniqueName: \"kubernetes.io/projected/f1df0b15-6927-4300-b034-6b5c3308320d-kube-api-access-tnk46\") pod \"root-account-create-update-2t2n6\" (UID: \"f1df0b15-6927-4300-b034-6b5c3308320d\") " pod="openstack/root-account-create-update-2t2n6" Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.755273 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1df0b15-6927-4300-b034-6b5c3308320d-operator-scripts\") pod \"root-account-create-update-2t2n6\" (UID: \"f1df0b15-6927-4300-b034-6b5c3308320d\") " pod="openstack/root-account-create-update-2t2n6" Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.773462 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnk46\" (UniqueName: \"kubernetes.io/projected/f1df0b15-6927-4300-b034-6b5c3308320d-kube-api-access-tnk46\") pod \"root-account-create-update-2t2n6\" (UID: \"f1df0b15-6927-4300-b034-6b5c3308320d\") " pod="openstack/root-account-create-update-2t2n6" Feb 18 14:20:04 crc kubenswrapper[4739]: I0218 14:20:04.861043 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2t2n6" Feb 18 14:20:05 crc kubenswrapper[4739]: I0218 14:20:05.629927 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2t2n6"] Feb 18 14:20:05 crc kubenswrapper[4739]: I0218 14:20:05.673261 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"06c16940-f153-4d15-891d-b0b91e9bce5a","Type":"ContainerStarted","Data":"2d7eba74d22f044df34fcf837e65ca0f2e9a819ee38ddcd2edfc0c8d1ca54976"} Feb 18 14:20:05 crc kubenswrapper[4739]: I0218 14:20:05.673307 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"06c16940-f153-4d15-891d-b0b91e9bce5a","Type":"ContainerStarted","Data":"2514064716b3f6a4ca2240a403645f2b949cf1307be4e104acdf8555dd6f695f"} Feb 18 14:20:05 crc kubenswrapper[4739]: I0218 14:20:05.676101 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2t2n6" event={"ID":"f1df0b15-6927-4300-b034-6b5c3308320d","Type":"ContainerStarted","Data":"26376d19c21e786b47736a5a91bcecd4e8d1a77a816ef99db75638cabc2785ad"} Feb 18 14:20:05 crc kubenswrapper[4739]: I0218 14:20:05.745947 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=23.745930014 podStartE2EDuration="23.745930014s" podCreationTimestamp="2026-02-18 14:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:05.734187423 +0000 UTC m=+1238.229908365" watchObservedRunningTime="2026-02-18 14:20:05.745930014 +0000 UTC m=+1238.241650936" Feb 18 14:20:06 crc kubenswrapper[4739]: I0218 14:20:06.687824 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2t2n6" event={"ID":"f1df0b15-6927-4300-b034-6b5c3308320d","Type":"ContainerStarted","Data":"fad628d0c641c2b53d938feaf95bc1f324bbe0db103093a12604f18fd9eafc41"} Feb 18 14:20:06 crc kubenswrapper[4739]: I0218 14:20:06.694076 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"08f1f5fdf0cc1b8ae1c2f6360da0f9744802d35e300a51dcbf03f8cbd0791ae3"} Feb 18 14:20:06 crc kubenswrapper[4739]: I0218 14:20:06.694111 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"8ff2ed06d7bb7a33eef044e31c1e17c774d5a47a46df71234fadeed94140a689"} Feb 18 14:20:06 crc kubenswrapper[4739]: I0218 14:20:06.694124 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"bf70db47279e4273c5b2a9187d41b22c243f16b3ace68331e4844615ed31e986"} Feb 18 14:20:07 crc kubenswrapper[4739]: I0218 14:20:07.712139 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"c201776fb96bab117bcbb2847e1926f7b7ab16c52d0e274cbd21b4dfa2dc8812"} Feb 18 14:20:07 crc kubenswrapper[4739]: I0218 14:20:07.712750 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"cd3e0ca5aeb4f731de00ccbe044fc94f659ea6e13f34bc55d0f10e50f7b38d27"} Feb 18 14:20:07 crc kubenswrapper[4739]: I0218 14:20:07.712768 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"5c9deabeebe1cca4f87bce896721938f30e09175649f837fbc18025790d74574"} Feb 18 14:20:07 crc kubenswrapper[4739]: I0218 14:20:07.715069 4739 generic.go:334] "Generic (PLEG): container finished" podID="f1df0b15-6927-4300-b034-6b5c3308320d" containerID="fad628d0c641c2b53d938feaf95bc1f324bbe0db103093a12604f18fd9eafc41" exitCode=0 Feb 18 14:20:07 crc kubenswrapper[4739]: I0218 14:20:07.715119 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2t2n6" event={"ID":"f1df0b15-6927-4300-b034-6b5c3308320d","Type":"ContainerDied","Data":"fad628d0c641c2b53d938feaf95bc1f324bbe0db103093a12604f18fd9eafc41"} Feb 18 14:20:07 crc kubenswrapper[4739]: I0218 14:20:07.810296 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 14:20:08 crc kubenswrapper[4739]: I0218 14:20:08.731908 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4da69d20-d4af-4d8d-b1e1-5026676d2078","Type":"ContainerStarted","Data":"784b00045d7b56ff771b3e749626f57d6b1b5dae332b4dc6eb4708c5bf3ddaa3"} Feb 18 14:20:08 crc kubenswrapper[4739]: I0218 14:20:08.776246 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.130451193 podStartE2EDuration="59.776228753s" podCreationTimestamp="2026-02-18 14:19:09 +0000 UTC" firstStartedPulling="2026-02-18 14:19:42.999175834 +0000 UTC m=+1215.494896756" lastFinishedPulling="2026-02-18 14:20:05.644953394 +0000 UTC m=+1238.140674316" observedRunningTime="2026-02-18 14:20:08.772415565 +0000 UTC m=+1241.268136517" watchObservedRunningTime="2026-02-18 14:20:08.776228753 +0000 UTC m=+1241.271949675" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.071966 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-jf2xn"] Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.074333 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.077048 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.099007 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-jf2xn"] Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.178027 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-config\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.178084 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.178111 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.178132 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.178160 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.178419 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24b8d\" (UniqueName: \"kubernetes.io/projected/449c4682-2359-4fcc-8578-fd524beaf6d6-kube-api-access-24b8d\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.188575 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2t2n6" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.280267 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1df0b15-6927-4300-b034-6b5c3308320d-operator-scripts\") pod \"f1df0b15-6927-4300-b034-6b5c3308320d\" (UID: \"f1df0b15-6927-4300-b034-6b5c3308320d\") " Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.280548 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnk46\" (UniqueName: \"kubernetes.io/projected/f1df0b15-6927-4300-b034-6b5c3308320d-kube-api-access-tnk46\") pod \"f1df0b15-6927-4300-b034-6b5c3308320d\" (UID: \"f1df0b15-6927-4300-b034-6b5c3308320d\") " Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.281055 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-config\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.281124 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.281170 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.281204 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.281219 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1df0b15-6927-4300-b034-6b5c3308320d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f1df0b15-6927-4300-b034-6b5c3308320d" (UID: "f1df0b15-6927-4300-b034-6b5c3308320d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.281244 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.281332 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24b8d\" (UniqueName: \"kubernetes.io/projected/449c4682-2359-4fcc-8578-fd524beaf6d6-kube-api-access-24b8d\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.281488 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1df0b15-6927-4300-b034-6b5c3308320d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.282185 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.283073 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.283899 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.283991 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-config\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.284657 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.292657 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1df0b15-6927-4300-b034-6b5c3308320d-kube-api-access-tnk46" (OuterVolumeSpecName: "kube-api-access-tnk46") pod "f1df0b15-6927-4300-b034-6b5c3308320d" (UID: "f1df0b15-6927-4300-b034-6b5c3308320d"). InnerVolumeSpecName "kube-api-access-tnk46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.301190 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24b8d\" (UniqueName: \"kubernetes.io/projected/449c4682-2359-4fcc-8578-fd524beaf6d6-kube-api-access-24b8d\") pod \"dnsmasq-dns-5c79d794d7-jf2xn\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.383794 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnk46\" (UniqueName: \"kubernetes.io/projected/f1df0b15-6927-4300-b034-6b5c3308320d-kube-api-access-tnk46\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.485419 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.751829 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2t2n6" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.751880 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2t2n6" event={"ID":"f1df0b15-6927-4300-b034-6b5c3308320d","Type":"ContainerDied","Data":"26376d19c21e786b47736a5a91bcecd4e8d1a77a816ef99db75638cabc2785ad"} Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.753085 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26376d19c21e786b47736a5a91bcecd4e8d1a77a816ef99db75638cabc2785ad" Feb 18 14:20:09 crc kubenswrapper[4739]: I0218 14:20:09.991009 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-jf2xn"] Feb 18 14:20:10 crc kubenswrapper[4739]: I0218 14:20:10.763363 4739 generic.go:334] "Generic (PLEG): container finished" podID="449c4682-2359-4fcc-8578-fd524beaf6d6" containerID="0af0be098f1f2e90f6517909dc969ea837f11c0c5020ec683a860a135d91b0f1" exitCode=0 Feb 18 14:20:10 crc kubenswrapper[4739]: I0218 14:20:10.763580 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" event={"ID":"449c4682-2359-4fcc-8578-fd524beaf6d6","Type":"ContainerDied","Data":"0af0be098f1f2e90f6517909dc969ea837f11c0c5020ec683a860a135d91b0f1"} Feb 18 14:20:10 crc kubenswrapper[4739]: I0218 14:20:10.763998 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" event={"ID":"449c4682-2359-4fcc-8578-fd524beaf6d6","Type":"ContainerStarted","Data":"4d2046f9d4641d243874fd60e2cf83edd0111ff1d89b77492ced2775ebec2c2c"} Feb 18 14:20:11 crc kubenswrapper[4739]: I0218 14:20:11.775980 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" event={"ID":"449c4682-2359-4fcc-8578-fd524beaf6d6","Type":"ContainerStarted","Data":"56f03329df21428f26d15e7ee78eafa34d6e85bde858c22c00ae4b6f3ec7369c"} Feb 18 14:20:11 crc kubenswrapper[4739]: I0218 14:20:11.776757 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:11 crc kubenswrapper[4739]: I0218 14:20:11.808883 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" podStartSLOduration=2.808865961 podStartE2EDuration="2.808865961s" podCreationTimestamp="2026-02-18 14:20:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:11.802274932 +0000 UTC m=+1244.297995854" watchObservedRunningTime="2026-02-18 14:20:11.808865961 +0000 UTC m=+1244.304586883" Feb 18 14:20:12 crc kubenswrapper[4739]: I0218 14:20:12.810056 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 14:20:12 crc kubenswrapper[4739]: I0218 14:20:12.816556 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 14:20:13 crc kubenswrapper[4739]: I0218 14:20:13.118843 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 18 14:20:13 crc kubenswrapper[4739]: I0218 14:20:13.217405 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 18 14:20:13 crc kubenswrapper[4739]: I0218 14:20:13.801734 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.326250 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-tzg9c"] Feb 18 14:20:15 crc kubenswrapper[4739]: E0218 14:20:15.326975 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1df0b15-6927-4300-b034-6b5c3308320d" containerName="mariadb-account-create-update" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.326989 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1df0b15-6927-4300-b034-6b5c3308320d" containerName="mariadb-account-create-update" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.327197 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1df0b15-6927-4300-b034-6b5c3308320d" containerName="mariadb-account-create-update" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.327862 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-tzg9c" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.339956 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-tzg9c"] Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.416298 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a-operator-scripts\") pod \"heat-db-create-tzg9c\" (UID: \"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a\") " pod="openstack/heat-db-create-tzg9c" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.416780 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqq5b\" (UniqueName: \"kubernetes.io/projected/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a-kube-api-access-jqq5b\") pod \"heat-db-create-tzg9c\" (UID: \"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a\") " pod="openstack/heat-db-create-tzg9c" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.498707 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-c4dd-account-create-update-xvgtp"] Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.500370 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-c4dd-account-create-update-xvgtp" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.503465 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.508264 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-4km74"] Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.510318 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4km74" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.519754 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqq5b\" (UniqueName: \"kubernetes.io/projected/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a-kube-api-access-jqq5b\") pod \"heat-db-create-tzg9c\" (UID: \"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a\") " pod="openstack/heat-db-create-tzg9c" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.519892 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a-operator-scripts\") pod \"heat-db-create-tzg9c\" (UID: \"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a\") " pod="openstack/heat-db-create-tzg9c" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.520638 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4km74"] Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.529907 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-c4dd-account-create-update-xvgtp"] Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.536718 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a-operator-scripts\") pod \"heat-db-create-tzg9c\" (UID: \"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a\") " pod="openstack/heat-db-create-tzg9c" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.568819 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-rlcgk"] Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.570376 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-rlcgk" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.594139 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqq5b\" (UniqueName: \"kubernetes.io/projected/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a-kube-api-access-jqq5b\") pod \"heat-db-create-tzg9c\" (UID: \"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a\") " pod="openstack/heat-db-create-tzg9c" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.620846 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rlcgk"] Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.622231 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da457314-f1eb-477e-93c7-cf0d01e0f1e1-operator-scripts\") pod \"cinder-db-create-4km74\" (UID: \"da457314-f1eb-477e-93c7-cf0d01e0f1e1\") " pod="openstack/cinder-db-create-4km74" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.622303 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p6qh\" (UniqueName: \"kubernetes.io/projected/da457314-f1eb-477e-93c7-cf0d01e0f1e1-kube-api-access-8p6qh\") pod \"cinder-db-create-4km74\" (UID: \"da457314-f1eb-477e-93c7-cf0d01e0f1e1\") " pod="openstack/cinder-db-create-4km74" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.622415 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20e0fc8a-5942-417e-9fbb-4f94536db193-operator-scripts\") pod \"heat-c4dd-account-create-update-xvgtp\" (UID: \"20e0fc8a-5942-417e-9fbb-4f94536db193\") " pod="openstack/heat-c4dd-account-create-update-xvgtp" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.622501 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92lcs\" (UniqueName: \"kubernetes.io/projected/20e0fc8a-5942-417e-9fbb-4f94536db193-kube-api-access-92lcs\") pod \"heat-c4dd-account-create-update-xvgtp\" (UID: \"20e0fc8a-5942-417e-9fbb-4f94536db193\") " pod="openstack/heat-c4dd-account-create-update-xvgtp" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.647205 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-tzg9c" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.656428 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-1ad6-account-create-update-pz97t"] Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.657970 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-1ad6-account-create-update-pz97t" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.660598 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.705653 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-1ad6-account-create-update-pz97t"] Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.725320 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20e0fc8a-5942-417e-9fbb-4f94536db193-operator-scripts\") pod \"heat-c4dd-account-create-update-xvgtp\" (UID: \"20e0fc8a-5942-417e-9fbb-4f94536db193\") " pod="openstack/heat-c4dd-account-create-update-xvgtp" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.725419 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92lcs\" (UniqueName: \"kubernetes.io/projected/20e0fc8a-5942-417e-9fbb-4f94536db193-kube-api-access-92lcs\") pod \"heat-c4dd-account-create-update-xvgtp\" (UID: \"20e0fc8a-5942-417e-9fbb-4f94536db193\") " pod="openstack/heat-c4dd-account-create-update-xvgtp" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.725538 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39bd8e39-8e54-46e1-8217-dbdd74be8a8c-operator-scripts\") pod \"cinder-1ad6-account-create-update-pz97t\" (UID: \"39bd8e39-8e54-46e1-8217-dbdd74be8a8c\") " pod="openstack/cinder-1ad6-account-create-update-pz97t" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.725692 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e60ca77-b621-4dfc-8b92-89d8cad06bf0-operator-scripts\") pod \"barbican-db-create-rlcgk\" (UID: \"4e60ca77-b621-4dfc-8b92-89d8cad06bf0\") " pod="openstack/barbican-db-create-rlcgk" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.725765 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da457314-f1eb-477e-93c7-cf0d01e0f1e1-operator-scripts\") pod \"cinder-db-create-4km74\" (UID: \"da457314-f1eb-477e-93c7-cf0d01e0f1e1\") " pod="openstack/cinder-db-create-4km74" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.725811 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fk9z\" (UniqueName: \"kubernetes.io/projected/39bd8e39-8e54-46e1-8217-dbdd74be8a8c-kube-api-access-8fk9z\") pod \"cinder-1ad6-account-create-update-pz97t\" (UID: \"39bd8e39-8e54-46e1-8217-dbdd74be8a8c\") " pod="openstack/cinder-1ad6-account-create-update-pz97t" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.725861 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p6qh\" (UniqueName: \"kubernetes.io/projected/da457314-f1eb-477e-93c7-cf0d01e0f1e1-kube-api-access-8p6qh\") pod \"cinder-db-create-4km74\" (UID: \"da457314-f1eb-477e-93c7-cf0d01e0f1e1\") " pod="openstack/cinder-db-create-4km74" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.725969 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpqw5\" (UniqueName: 
\"kubernetes.io/projected/4e60ca77-b621-4dfc-8b92-89d8cad06bf0-kube-api-access-jpqw5\") pod \"barbican-db-create-rlcgk\" (UID: \"4e60ca77-b621-4dfc-8b92-89d8cad06bf0\") " pod="openstack/barbican-db-create-rlcgk" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.726385 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20e0fc8a-5942-417e-9fbb-4f94536db193-operator-scripts\") pod \"heat-c4dd-account-create-update-xvgtp\" (UID: \"20e0fc8a-5942-417e-9fbb-4f94536db193\") " pod="openstack/heat-c4dd-account-create-update-xvgtp" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.729533 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da457314-f1eb-477e-93c7-cf0d01e0f1e1-operator-scripts\") pod \"cinder-db-create-4km74\" (UID: \"da457314-f1eb-477e-93c7-cf0d01e0f1e1\") " pod="openstack/cinder-db-create-4km74" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.768206 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92lcs\" (UniqueName: \"kubernetes.io/projected/20e0fc8a-5942-417e-9fbb-4f94536db193-kube-api-access-92lcs\") pod \"heat-c4dd-account-create-update-xvgtp\" (UID: \"20e0fc8a-5942-417e-9fbb-4f94536db193\") " pod="openstack/heat-c4dd-account-create-update-xvgtp" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.769116 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p6qh\" (UniqueName: \"kubernetes.io/projected/da457314-f1eb-477e-93c7-cf0d01e0f1e1-kube-api-access-8p6qh\") pod \"cinder-db-create-4km74\" (UID: \"da457314-f1eb-477e-93c7-cf0d01e0f1e1\") " pod="openstack/cinder-db-create-4km74" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.828984 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39bd8e39-8e54-46e1-8217-dbdd74be8a8c-operator-scripts\") pod \"cinder-1ad6-account-create-update-pz97t\" (UID: \"39bd8e39-8e54-46e1-8217-dbdd74be8a8c\") " pod="openstack/cinder-1ad6-account-create-update-pz97t" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.829407 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e60ca77-b621-4dfc-8b92-89d8cad06bf0-operator-scripts\") pod \"barbican-db-create-rlcgk\" (UID: \"4e60ca77-b621-4dfc-8b92-89d8cad06bf0\") " pod="openstack/barbican-db-create-rlcgk" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.829509 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fk9z\" (UniqueName: \"kubernetes.io/projected/39bd8e39-8e54-46e1-8217-dbdd74be8a8c-kube-api-access-8fk9z\") pod \"cinder-1ad6-account-create-update-pz97t\" (UID: \"39bd8e39-8e54-46e1-8217-dbdd74be8a8c\") " pod="openstack/cinder-1ad6-account-create-update-pz97t" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.829592 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpqw5\" (UniqueName: \"kubernetes.io/projected/4e60ca77-b621-4dfc-8b92-89d8cad06bf0-kube-api-access-jpqw5\") pod \"barbican-db-create-rlcgk\" (UID: \"4e60ca77-b621-4dfc-8b92-89d8cad06bf0\") " pod="openstack/barbican-db-create-rlcgk" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.830857 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39bd8e39-8e54-46e1-8217-dbdd74be8a8c-operator-scripts\") pod \"cinder-1ad6-account-create-update-pz97t\" (UID: \"39bd8e39-8e54-46e1-8217-dbdd74be8a8c\") " pod="openstack/cinder-1ad6-account-create-update-pz97t" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.831375 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e60ca77-b621-4dfc-8b92-89d8cad06bf0-operator-scripts\") pod \"barbican-db-create-rlcgk\" (UID: \"4e60ca77-b621-4dfc-8b92-89d8cad06bf0\") " pod="openstack/barbican-db-create-rlcgk" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.837374 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-c4dd-account-create-update-xvgtp" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.869804 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4km74" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.897902 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpqw5\" (UniqueName: \"kubernetes.io/projected/4e60ca77-b621-4dfc-8b92-89d8cad06bf0-kube-api-access-jpqw5\") pod \"barbican-db-create-rlcgk\" (UID: \"4e60ca77-b621-4dfc-8b92-89d8cad06bf0\") " pod="openstack/barbican-db-create-rlcgk" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.932177 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fk9z\" (UniqueName: \"kubernetes.io/projected/39bd8e39-8e54-46e1-8217-dbdd74be8a8c-kube-api-access-8fk9z\") pod \"cinder-1ad6-account-create-update-pz97t\" (UID: \"39bd8e39-8e54-46e1-8217-dbdd74be8a8c\") " pod="openstack/cinder-1ad6-account-create-update-pz97t" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.952824 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-6lzcd"] Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.957488 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6lzcd" Feb 18 14:20:15 crc kubenswrapper[4739]: I0218 14:20:15.988968 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rlcgk" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.115004 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-gsm82"] Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.118040 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-gsm82" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.126414 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.126635 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.127992 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.128762 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5fzf8" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.179677 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6lzcd"] Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.180161 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1ad6-account-create-update-pz97t" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.191129 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gsm82"] Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.211181 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f06df363-1196-4ba5-a5ba-d6e6c419a9d2-operator-scripts\") pod \"neutron-db-create-6lzcd\" (UID: \"f06df363-1196-4ba5-a5ba-d6e6c419a9d2\") " pod="openstack/neutron-db-create-6lzcd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.211316 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc2b2\" (UniqueName: \"kubernetes.io/projected/f06df363-1196-4ba5-a5ba-d6e6c419a9d2-kube-api-access-qc2b2\") pod \"neutron-db-create-6lzcd\" (UID: \"f06df363-1196-4ba5-a5ba-d6e6c419a9d2\") " pod="openstack/neutron-db-create-6lzcd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.244734 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-64f1-account-create-update-9xxvd"] Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.249572 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64f1-account-create-update-9xxvd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.264552 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.277208 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64f1-account-create-update-9xxvd"] Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.319877 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc987\" (UniqueName: \"kubernetes.io/projected/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-kube-api-access-lc987\") pod \"keystone-db-sync-gsm82\" (UID: \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\") " pod="openstack/keystone-db-sync-gsm82" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.320016 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-config-data\") pod \"keystone-db-sync-gsm82\" (UID: \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\") " pod="openstack/keystone-db-sync-gsm82" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.320575 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f06df363-1196-4ba5-a5ba-d6e6c419a9d2-operator-scripts\") pod \"neutron-db-create-6lzcd\" (UID: \"f06df363-1196-4ba5-a5ba-d6e6c419a9d2\") " pod="openstack/neutron-db-create-6lzcd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.320653 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-combined-ca-bundle\") pod \"keystone-db-sync-gsm82\" (UID: \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\") " pod="openstack/keystone-db-sync-gsm82" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.320810 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc2b2\" (UniqueName: \"kubernetes.io/projected/f06df363-1196-4ba5-a5ba-d6e6c419a9d2-kube-api-access-qc2b2\") pod \"neutron-db-create-6lzcd\" (UID: \"f06df363-1196-4ba5-a5ba-d6e6c419a9d2\") " pod="openstack/neutron-db-create-6lzcd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.322351 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f06df363-1196-4ba5-a5ba-d6e6c419a9d2-operator-scripts\") pod \"neutron-db-create-6lzcd\" (UID: \"f06df363-1196-4ba5-a5ba-d6e6c419a9d2\") " pod="openstack/neutron-db-create-6lzcd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.336710 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-d1d2-account-create-update-spvtj"] Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.338221 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-d1d2-account-create-update-spvtj" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.342408 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.361686 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d1d2-account-create-update-spvtj"] Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.363154 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc2b2\" (UniqueName: \"kubernetes.io/projected/f06df363-1196-4ba5-a5ba-d6e6c419a9d2-kube-api-access-qc2b2\") pod \"neutron-db-create-6lzcd\" (UID: \"f06df363-1196-4ba5-a5ba-d6e6c419a9d2\") " pod="openstack/neutron-db-create-6lzcd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.456788 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc987\" (UniqueName: \"kubernetes.io/projected/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-kube-api-access-lc987\") pod \"keystone-db-sync-gsm82\" (UID: \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\") " pod="openstack/keystone-db-sync-gsm82" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.456872 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-config-data\") pod \"keystone-db-sync-gsm82\" (UID: \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\") " pod="openstack/keystone-db-sync-gsm82" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.457394 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-combined-ca-bundle\") pod \"keystone-db-sync-gsm82\" (UID: \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\") " pod="openstack/keystone-db-sync-gsm82" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.457530 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d208990-8bd6-4b82-bba8-200f5c7985d0-operator-scripts\") pod \"neutron-64f1-account-create-update-9xxvd\" (UID: \"4d208990-8bd6-4b82-bba8-200f5c7985d0\") " pod="openstack/neutron-64f1-account-create-update-9xxvd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.459086 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vqvg\" (UniqueName: \"kubernetes.io/projected/4d208990-8bd6-4b82-bba8-200f5c7985d0-kube-api-access-6vqvg\") pod \"neutron-64f1-account-create-update-9xxvd\" (UID: \"4d208990-8bd6-4b82-bba8-200f5c7985d0\") " pod="openstack/neutron-64f1-account-create-update-9xxvd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.479795 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-config-data\") pod \"keystone-db-sync-gsm82\" (UID: \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\") " pod="openstack/keystone-db-sync-gsm82" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.486979 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-combined-ca-bundle\") pod \"keystone-db-sync-gsm82\" (UID: \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\") " 
pod="openstack/keystone-db-sync-gsm82" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.503321 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-tzg9c"] Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.515779 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc987\" (UniqueName: \"kubernetes.io/projected/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-kube-api-access-lc987\") pod \"keystone-db-sync-gsm82\" (UID: \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\") " pod="openstack/keystone-db-sync-gsm82" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.561135 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d208990-8bd6-4b82-bba8-200f5c7985d0-operator-scripts\") pod \"neutron-64f1-account-create-update-9xxvd\" (UID: \"4d208990-8bd6-4b82-bba8-200f5c7985d0\") " pod="openstack/neutron-64f1-account-create-update-9xxvd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.562107 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d208990-8bd6-4b82-bba8-200f5c7985d0-operator-scripts\") pod \"neutron-64f1-account-create-update-9xxvd\" (UID: \"4d208990-8bd6-4b82-bba8-200f5c7985d0\") " pod="openstack/neutron-64f1-account-create-update-9xxvd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.562626 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vqvg\" (UniqueName: \"kubernetes.io/projected/4d208990-8bd6-4b82-bba8-200f5c7985d0-kube-api-access-6vqvg\") pod \"neutron-64f1-account-create-update-9xxvd\" (UID: \"4d208990-8bd6-4b82-bba8-200f5c7985d0\") " pod="openstack/neutron-64f1-account-create-update-9xxvd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.562989 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c90e24b-98c5-4e26-8819-a5ae1aef1102-operator-scripts\") pod \"barbican-d1d2-account-create-update-spvtj\" (UID: \"2c90e24b-98c5-4e26-8819-a5ae1aef1102\") " pod="openstack/barbican-d1d2-account-create-update-spvtj" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.563055 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zr6d\" (UniqueName: \"kubernetes.io/projected/2c90e24b-98c5-4e26-8819-a5ae1aef1102-kube-api-access-8zr6d\") pod \"barbican-d1d2-account-create-update-spvtj\" (UID: \"2c90e24b-98c5-4e26-8819-a5ae1aef1102\") " pod="openstack/barbican-d1d2-account-create-update-spvtj" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.592883 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vqvg\" (UniqueName: \"kubernetes.io/projected/4d208990-8bd6-4b82-bba8-200f5c7985d0-kube-api-access-6vqvg\") pod \"neutron-64f1-account-create-update-9xxvd\" (UID: \"4d208990-8bd6-4b82-bba8-200f5c7985d0\") " pod="openstack/neutron-64f1-account-create-update-9xxvd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.597751 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64f1-account-create-update-9xxvd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.638865 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-6lzcd" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.671847 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c90e24b-98c5-4e26-8819-a5ae1aef1102-operator-scripts\") pod \"barbican-d1d2-account-create-update-spvtj\" (UID: \"2c90e24b-98c5-4e26-8819-a5ae1aef1102\") " pod="openstack/barbican-d1d2-account-create-update-spvtj" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.671938 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zr6d\" (UniqueName: \"kubernetes.io/projected/2c90e24b-98c5-4e26-8819-a5ae1aef1102-kube-api-access-8zr6d\") pod \"barbican-d1d2-account-create-update-spvtj\" (UID: \"2c90e24b-98c5-4e26-8819-a5ae1aef1102\") " pod="openstack/barbican-d1d2-account-create-update-spvtj" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.674642 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c90e24b-98c5-4e26-8819-a5ae1aef1102-operator-scripts\") pod \"barbican-d1d2-account-create-update-spvtj\" (UID: \"2c90e24b-98c5-4e26-8819-a5ae1aef1102\") " pod="openstack/barbican-d1d2-account-create-update-spvtj" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.693703 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zr6d\" (UniqueName: \"kubernetes.io/projected/2c90e24b-98c5-4e26-8819-a5ae1aef1102-kube-api-access-8zr6d\") pod \"barbican-d1d2-account-create-update-spvtj\" (UID: \"2c90e24b-98c5-4e26-8819-a5ae1aef1102\") " pod="openstack/barbican-d1d2-account-create-update-spvtj" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.752344 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d1d2-account-create-update-spvtj" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.763032 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-gsm82" Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.897766 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-tzg9c" event={"ID":"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a","Type":"ContainerStarted","Data":"2fcc9ff0ee9ec1bf3215bb73da9c8794568d2a01e795bd13f6b0ee5f76cb462a"} Feb 18 14:20:16 crc kubenswrapper[4739]: I0218 14:20:16.978840 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-c4dd-account-create-update-xvgtp"] Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.051489 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4km74"] Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.164246 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rlcgk"] Feb 18 14:20:17 crc kubenswrapper[4739]: W0218 14:20:17.182793 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e60ca77_b621_4dfc_8b92_89d8cad06bf0.slice/crio-dcfe4981324d6ed3c0d0658d5618590362273e2ee59332611ea9c220eff9097a WatchSource:0}: Error finding container dcfe4981324d6ed3c0d0658d5618590362273e2ee59332611ea9c220eff9097a: Status 404 returned error can't find the container with id dcfe4981324d6ed3c0d0658d5618590362273e2ee59332611ea9c220eff9097a Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.483384 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-1ad6-account-create-update-pz97t"] Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.767769 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64f1-account-create-update-9xxvd"] Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.786491 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6lzcd"] Feb 18 14:20:17 crc kubenswrapper[4739]: W0218 14:20:17.795784 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c90e24b_98c5_4e26_8819_a5ae1aef1102.slice/crio-edf655321c9334ae71b9620b789ac350b12afd8f8dad87241641d9fc65e18d81 WatchSource:0}: Error finding container edf655321c9334ae71b9620b789ac350b12afd8f8dad87241641d9fc65e18d81: Status 404 returned error can't find the container with id edf655321c9334ae71b9620b789ac350b12afd8f8dad87241641d9fc65e18d81 Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.799686 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d1d2-account-create-update-spvtj"] Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.882931 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gsm82"] Feb 18 14:20:17 crc kubenswrapper[4739]: W0218 14:20:17.901771 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbeb37ff_68ee_4cc5_add5_18fc25605b6f.slice/crio-10e793196c49816c98522b2b831956b794e78cf95f2276d50891f9592e2570fa WatchSource:0}: Error finding container 10e793196c49816c98522b2b831956b794e78cf95f2276d50891f9592e2570fa: Status 404 returned error can't find the container with id 10e793196c49816c98522b2b831956b794e78cf95f2276d50891f9592e2570fa Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.910904 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64f1-account-create-update-9xxvd" 
event={"ID":"4d208990-8bd6-4b82-bba8-200f5c7985d0","Type":"ContainerStarted","Data":"1a2360048a15096079dd7c59ee1514d1f0b25699b543e5c5cc39d05d95a5037b"} Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.916105 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rlcgk" event={"ID":"4e60ca77-b621-4dfc-8b92-89d8cad06bf0","Type":"ContainerStarted","Data":"6e738a7131fce65327168b727257db46debba0b3633c57a8a9e6484d2f38829f"} Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.916160 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rlcgk" event={"ID":"4e60ca77-b621-4dfc-8b92-89d8cad06bf0","Type":"ContainerStarted","Data":"dcfe4981324d6ed3c0d0658d5618590362273e2ee59332611ea9c220eff9097a"} Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.918678 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-tzg9c" event={"ID":"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a","Type":"ContainerStarted","Data":"aa9ecd9df38cda3b827f1db0a7848f77cc373ad0ddebd313df697a0b9ff36e7e"} Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.941718 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c4dd-account-create-update-xvgtp" event={"ID":"20e0fc8a-5942-417e-9fbb-4f94536db193","Type":"ContainerStarted","Data":"6e0f8193aeee1a9fde88a87836367d413530c7cef69dff31c0125463693bc71d"} Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.942059 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c4dd-account-create-update-xvgtp" event={"ID":"20e0fc8a-5942-417e-9fbb-4f94536db193","Type":"ContainerStarted","Data":"b59a0b6d590e4cc3c7b35ff633fe05c48128b7f0135fc72689f314a250c98f12"} Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.945075 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6lzcd" event={"ID":"f06df363-1196-4ba5-a5ba-d6e6c419a9d2","Type":"ContainerStarted","Data":"b57050692b3cf280eb19a6dc458c5f9ebf852ff24130bca1673c550837aa8f06"} Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.948988 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4km74" event={"ID":"da457314-f1eb-477e-93c7-cf0d01e0f1e1","Type":"ContainerStarted","Data":"983f1c80cf67be3eed058f21350cec25209804a043b4033e89a7b4a7d1a23683"} Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.949044 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4km74" event={"ID":"da457314-f1eb-477e-93c7-cf0d01e0f1e1","Type":"ContainerStarted","Data":"5274d2c880f0b37137d00033ef51b4576afb98ee116a2c189de7881559882ace"} Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.955877 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1ad6-account-create-update-pz97t" event={"ID":"39bd8e39-8e54-46e1-8217-dbdd74be8a8c","Type":"ContainerStarted","Data":"0d326d9bd65ce654fe1a2b264586d9b66aecc19bd475abfcd3d94ee3f6d660d5"} Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.955935 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1ad6-account-create-update-pz97t" event={"ID":"39bd8e39-8e54-46e1-8217-dbdd74be8a8c","Type":"ContainerStarted","Data":"fa910e243a5121f5d39cb671e037cfa3b198d87a07aad38c1c529812ffbef96b"} Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.959047 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d1d2-account-create-update-spvtj" 
event={"ID":"2c90e24b-98c5-4e26-8819-a5ae1aef1102","Type":"ContainerStarted","Data":"edf655321c9334ae71b9620b789ac350b12afd8f8dad87241641d9fc65e18d81"} Feb 18 14:20:17 crc kubenswrapper[4739]: I0218 14:20:17.974399 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-rlcgk" podStartSLOduration=2.974376338 podStartE2EDuration="2.974376338s" podCreationTimestamp="2026-02-18 14:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:17.939518314 +0000 UTC m=+1250.435239256" watchObservedRunningTime="2026-02-18 14:20:17.974376338 +0000 UTC m=+1250.470097260" Feb 18 14:20:18 crc kubenswrapper[4739]: I0218 14:20:18.000318 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-tzg9c" podStartSLOduration=3.000287213 podStartE2EDuration="3.000287213s" podCreationTimestamp="2026-02-18 14:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:17.959439965 +0000 UTC m=+1250.455160887" watchObservedRunningTime="2026-02-18 14:20:18.000287213 +0000 UTC m=+1250.496008145" Feb 18 14:20:18 crc kubenswrapper[4739]: I0218 14:20:18.009105 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-c4dd-account-create-update-xvgtp" podStartSLOduration=3.009077758 podStartE2EDuration="3.009077758s" podCreationTimestamp="2026-02-18 14:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:17.982361873 +0000 UTC m=+1250.478082795" watchObservedRunningTime="2026-02-18 14:20:18.009077758 +0000 UTC m=+1250.504798690" Feb 18 14:20:18 crc kubenswrapper[4739]: I0218 14:20:18.031258 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-1ad6-account-create-update-pz97t" podStartSLOduration=3.031233077 podStartE2EDuration="3.031233077s" podCreationTimestamp="2026-02-18 14:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:18.000134919 +0000 UTC m=+1250.495855851" watchObservedRunningTime="2026-02-18 14:20:18.031233077 +0000 UTC m=+1250.526953999" Feb 18 14:20:18 crc kubenswrapper[4739]: I0218 14:20:18.042511 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-4km74" podStartSLOduration=3.042484795 podStartE2EDuration="3.042484795s" podCreationTimestamp="2026-02-18 14:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:18.014887857 +0000 UTC m=+1250.510608789" watchObservedRunningTime="2026-02-18 14:20:18.042484795 +0000 UTC m=+1250.538205717" Feb 18 14:20:18 crc kubenswrapper[4739]: I0218 14:20:18.988253 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64f1-account-create-update-9xxvd" event={"ID":"4d208990-8bd6-4b82-bba8-200f5c7985d0","Type":"ContainerStarted","Data":"76d32868e66155322323110ff775c5fb0e6f82fae8441ced2e3f98e4b9321c1d"} Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.050329 4739 generic.go:334] "Generic (PLEG): container finished" podID="20e0fc8a-5942-417e-9fbb-4f94536db193" 
containerID="6e0f8193aeee1a9fde88a87836367d413530c7cef69dff31c0125463693bc71d" exitCode=0 Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.050429 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c4dd-account-create-update-xvgtp" event={"ID":"20e0fc8a-5942-417e-9fbb-4f94536db193","Type":"ContainerDied","Data":"6e0f8193aeee1a9fde88a87836367d413530c7cef69dff31c0125463693bc71d"} Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.052406 4739 generic.go:334] "Generic (PLEG): container finished" podID="f06df363-1196-4ba5-a5ba-d6e6c419a9d2" containerID="e1cc91021e3962c425b43e910f166ba0094177006eafab98477f0ed269daa076" exitCode=0 Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.052465 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6lzcd" event={"ID":"f06df363-1196-4ba5-a5ba-d6e6c419a9d2","Type":"ContainerDied","Data":"e1cc91021e3962c425b43e910f166ba0094177006eafab98477f0ed269daa076"} Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.054497 4739 generic.go:334] "Generic (PLEG): container finished" podID="da457314-f1eb-477e-93c7-cf0d01e0f1e1" containerID="983f1c80cf67be3eed058f21350cec25209804a043b4033e89a7b4a7d1a23683" exitCode=0 Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.054538 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4km74" event={"ID":"da457314-f1eb-477e-93c7-cf0d01e0f1e1","Type":"ContainerDied","Data":"983f1c80cf67be3eed058f21350cec25209804a043b4033e89a7b4a7d1a23683"} Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.072049 4739 generic.go:334] "Generic (PLEG): container finished" podID="39bd8e39-8e54-46e1-8217-dbdd74be8a8c" containerID="0d326d9bd65ce654fe1a2b264586d9b66aecc19bd475abfcd3d94ee3f6d660d5" exitCode=0 Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.072124 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1ad6-account-create-update-pz97t" event={"ID":"39bd8e39-8e54-46e1-8217-dbdd74be8a8c","Type":"ContainerDied","Data":"0d326d9bd65ce654fe1a2b264586d9b66aecc19bd475abfcd3d94ee3f6d660d5"} Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.079906 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-64f1-account-create-update-9xxvd" podStartSLOduration=4.079886089 podStartE2EDuration="4.079886089s" podCreationTimestamp="2026-02-18 14:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:19.049762456 +0000 UTC m=+1251.545483378" watchObservedRunningTime="2026-02-18 14:20:19.079886089 +0000 UTC m=+1251.575607011" Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.118811 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d1d2-account-create-update-spvtj" event={"ID":"2c90e24b-98c5-4e26-8819-a5ae1aef1102","Type":"ContainerStarted","Data":"f594884fb4b83b0c04ce8bf8aae7f920c402fcb97cae39a2f4cf017d5bf71b59"} Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.137782 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gsm82" event={"ID":"dbeb37ff-68ee-4cc5-add5-18fc25605b6f","Type":"ContainerStarted","Data":"10e793196c49816c98522b2b831956b794e78cf95f2276d50891f9592e2570fa"} Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.163785 4739 generic.go:334] "Generic (PLEG): container finished" podID="4e60ca77-b621-4dfc-8b92-89d8cad06bf0" 
containerID="6e738a7131fce65327168b727257db46debba0b3633c57a8a9e6484d2f38829f" exitCode=0 Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.163903 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rlcgk" event={"ID":"4e60ca77-b621-4dfc-8b92-89d8cad06bf0","Type":"ContainerDied","Data":"6e738a7131fce65327168b727257db46debba0b3633c57a8a9e6484d2f38829f"} Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.185973 4739 generic.go:334] "Generic (PLEG): container finished" podID="26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a" containerID="aa9ecd9df38cda3b827f1db0a7848f77cc373ad0ddebd313df697a0b9ff36e7e" exitCode=0 Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.186041 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-tzg9c" event={"ID":"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a","Type":"ContainerDied","Data":"aa9ecd9df38cda3b827f1db0a7848f77cc373ad0ddebd313df697a0b9ff36e7e"} Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.311327 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-d1d2-account-create-update-spvtj" podStartSLOduration=3.311300436 podStartE2EDuration="3.311300436s" podCreationTimestamp="2026-02-18 14:20:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:19.299553184 +0000 UTC m=+1251.795274106" watchObservedRunningTime="2026-02-18 14:20:19.311300436 +0000 UTC m=+1251.807021358" Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.486612 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.565331 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lgwdh"] Feb 18 14:20:19 crc kubenswrapper[4739]: I0218 14:20:19.581287 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" podUID="b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" containerName="dnsmasq-dns" containerID="cri-o://bd2acd3a75008df77a9a70e8c10e031a2f47232a877e8beae462dd4837d94738" gracePeriod=10 Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.199026 4739 generic.go:334] "Generic (PLEG): container finished" podID="2c90e24b-98c5-4e26-8819-a5ae1aef1102" containerID="f594884fb4b83b0c04ce8bf8aae7f920c402fcb97cae39a2f4cf017d5bf71b59" exitCode=0 Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.199123 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d1d2-account-create-update-spvtj" event={"ID":"2c90e24b-98c5-4e26-8819-a5ae1aef1102","Type":"ContainerDied","Data":"f594884fb4b83b0c04ce8bf8aae7f920c402fcb97cae39a2f4cf017d5bf71b59"} Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.207332 4739 generic.go:334] "Generic (PLEG): container finished" podID="4d208990-8bd6-4b82-bba8-200f5c7985d0" containerID="76d32868e66155322323110ff775c5fb0e6f82fae8441ced2e3f98e4b9321c1d" exitCode=0 Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.207426 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64f1-account-create-update-9xxvd" event={"ID":"4d208990-8bd6-4b82-bba8-200f5c7985d0","Type":"ContainerDied","Data":"76d32868e66155322323110ff775c5fb0e6f82fae8441ced2e3f98e4b9321c1d"} Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.209729 4739 generic.go:334] "Generic (PLEG): container finished" 
podID="b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" containerID="bd2acd3a75008df77a9a70e8c10e031a2f47232a877e8beae462dd4837d94738" exitCode=0 Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.209936 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" event={"ID":"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0","Type":"ContainerDied","Data":"bd2acd3a75008df77a9a70e8c10e031a2f47232a877e8beae462dd4837d94738"} Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.359290 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.541323 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-config\") pod \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.541465 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpddl\" (UniqueName: \"kubernetes.io/projected/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-kube-api-access-zpddl\") pod \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.541630 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-ovsdbserver-nb\") pod \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.541843 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-ovsdbserver-sb\") pod \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.541874 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-dns-svc\") pod \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\" (UID: \"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0\") " Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.552719 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-kube-api-access-zpddl" (OuterVolumeSpecName: "kube-api-access-zpddl") pod "b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" (UID: "b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0"). InnerVolumeSpecName "kube-api-access-zpddl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.628095 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" (UID: "b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.649009 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpddl\" (UniqueName: \"kubernetes.io/projected/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-kube-api-access-zpddl\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.649041 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.665258 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" (UID: "b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.713271 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" (UID: "b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.753149 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.753780 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.802042 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-config" (OuterVolumeSpecName: "config") pod "b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" (UID: "b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.862497 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:20 crc kubenswrapper[4739]: I0218 14:20:20.895229 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-tzg9c" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.052863 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1ad6-account-create-update-pz97t" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.065012 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4km74" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.070672 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a-operator-scripts\") pod \"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a\" (UID: \"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a\") " Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.070766 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqq5b\" (UniqueName: \"kubernetes.io/projected/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a-kube-api-access-jqq5b\") pod \"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a\" (UID: \"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a\") " Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.081867 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6lzcd" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.082957 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a-kube-api-access-jqq5b" (OuterVolumeSpecName: "kube-api-access-jqq5b") pod "26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a" (UID: "26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a"). InnerVolumeSpecName "kube-api-access-jqq5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.084095 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a" (UID: "26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.138794 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rlcgk" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.148693 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-c4dd-account-create-update-xvgtp" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.174350 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da457314-f1eb-477e-93c7-cf0d01e0f1e1-operator-scripts\") pod \"da457314-f1eb-477e-93c7-cf0d01e0f1e1\" (UID: \"da457314-f1eb-477e-93c7-cf0d01e0f1e1\") " Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.174420 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fk9z\" (UniqueName: \"kubernetes.io/projected/39bd8e39-8e54-46e1-8217-dbdd74be8a8c-kube-api-access-8fk9z\") pod \"39bd8e39-8e54-46e1-8217-dbdd74be8a8c\" (UID: \"39bd8e39-8e54-46e1-8217-dbdd74be8a8c\") " Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.174730 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39bd8e39-8e54-46e1-8217-dbdd74be8a8c-operator-scripts\") pod \"39bd8e39-8e54-46e1-8217-dbdd74be8a8c\" (UID: \"39bd8e39-8e54-46e1-8217-dbdd74be8a8c\") " Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.174792 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f06df363-1196-4ba5-a5ba-d6e6c419a9d2-operator-scripts\") pod \"f06df363-1196-4ba5-a5ba-d6e6c419a9d2\" (UID: \"f06df363-1196-4ba5-a5ba-d6e6c419a9d2\") " Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.174929 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc2b2\" (UniqueName: \"kubernetes.io/projected/f06df363-1196-4ba5-a5ba-d6e6c419a9d2-kube-api-access-qc2b2\") pod \"f06df363-1196-4ba5-a5ba-d6e6c419a9d2\" (UID: \"f06df363-1196-4ba5-a5ba-d6e6c419a9d2\") " Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.175034 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p6qh\" (UniqueName: \"kubernetes.io/projected/da457314-f1eb-477e-93c7-cf0d01e0f1e1-kube-api-access-8p6qh\") pod \"da457314-f1eb-477e-93c7-cf0d01e0f1e1\" (UID: \"da457314-f1eb-477e-93c7-cf0d01e0f1e1\") " Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.175726 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.175751 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqq5b\" (UniqueName: \"kubernetes.io/projected/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a-kube-api-access-jqq5b\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.176041 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f06df363-1196-4ba5-a5ba-d6e6c419a9d2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f06df363-1196-4ba5-a5ba-d6e6c419a9d2" (UID: "f06df363-1196-4ba5-a5ba-d6e6c419a9d2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.176370 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39bd8e39-8e54-46e1-8217-dbdd74be8a8c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "39bd8e39-8e54-46e1-8217-dbdd74be8a8c" (UID: "39bd8e39-8e54-46e1-8217-dbdd74be8a8c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.182878 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39bd8e39-8e54-46e1-8217-dbdd74be8a8c-kube-api-access-8fk9z" (OuterVolumeSpecName: "kube-api-access-8fk9z") pod "39bd8e39-8e54-46e1-8217-dbdd74be8a8c" (UID: "39bd8e39-8e54-46e1-8217-dbdd74be8a8c"). InnerVolumeSpecName "kube-api-access-8fk9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.184013 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da457314-f1eb-477e-93c7-cf0d01e0f1e1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da457314-f1eb-477e-93c7-cf0d01e0f1e1" (UID: "da457314-f1eb-477e-93c7-cf0d01e0f1e1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.186510 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f06df363-1196-4ba5-a5ba-d6e6c419a9d2-kube-api-access-qc2b2" (OuterVolumeSpecName: "kube-api-access-qc2b2") pod "f06df363-1196-4ba5-a5ba-d6e6c419a9d2" (UID: "f06df363-1196-4ba5-a5ba-d6e6c419a9d2"). InnerVolumeSpecName "kube-api-access-qc2b2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.197831 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da457314-f1eb-477e-93c7-cf0d01e0f1e1-kube-api-access-8p6qh" (OuterVolumeSpecName: "kube-api-access-8p6qh") pod "da457314-f1eb-477e-93c7-cf0d01e0f1e1" (UID: "da457314-f1eb-477e-93c7-cf0d01e0f1e1"). InnerVolumeSpecName "kube-api-access-8p6qh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.274828 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-6lzcd" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.274837 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6lzcd" event={"ID":"f06df363-1196-4ba5-a5ba-d6e6c419a9d2","Type":"ContainerDied","Data":"b57050692b3cf280eb19a6dc458c5f9ebf852ff24130bca1673c550837aa8f06"} Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.275613 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b57050692b3cf280eb19a6dc458c5f9ebf852ff24130bca1673c550837aa8f06" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.276725 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e60ca77-b621-4dfc-8b92-89d8cad06bf0-operator-scripts\") pod \"4e60ca77-b621-4dfc-8b92-89d8cad06bf0\" (UID: \"4e60ca77-b621-4dfc-8b92-89d8cad06bf0\") " Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.277329 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpqw5\" (UniqueName: \"kubernetes.io/projected/4e60ca77-b621-4dfc-8b92-89d8cad06bf0-kube-api-access-jpqw5\") pod \"4e60ca77-b621-4dfc-8b92-89d8cad06bf0\" (UID: \"4e60ca77-b621-4dfc-8b92-89d8cad06bf0\") " Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.277597 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20e0fc8a-5942-417e-9fbb-4f94536db193-operator-scripts\") pod \"20e0fc8a-5942-417e-9fbb-4f94536db193\" (UID: \"20e0fc8a-5942-417e-9fbb-4f94536db193\") " Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.277757 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92lcs\" (UniqueName: \"kubernetes.io/projected/20e0fc8a-5942-417e-9fbb-4f94536db193-kube-api-access-92lcs\") pod \"20e0fc8a-5942-417e-9fbb-4f94536db193\" (UID: \"20e0fc8a-5942-417e-9fbb-4f94536db193\") " Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.278864 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39bd8e39-8e54-46e1-8217-dbdd74be8a8c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.278888 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f06df363-1196-4ba5-a5ba-d6e6c419a9d2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.278902 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc2b2\" (UniqueName: \"kubernetes.io/projected/f06df363-1196-4ba5-a5ba-d6e6c419a9d2-kube-api-access-qc2b2\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.278915 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p6qh\" (UniqueName: \"kubernetes.io/projected/da457314-f1eb-477e-93c7-cf0d01e0f1e1-kube-api-access-8p6qh\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.279113 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da457314-f1eb-477e-93c7-cf0d01e0f1e1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.279153 4739 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-8fk9z\" (UniqueName: \"kubernetes.io/projected/39bd8e39-8e54-46e1-8217-dbdd74be8a8c-kube-api-access-8fk9z\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.282513 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20e0fc8a-5942-417e-9fbb-4f94536db193-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20e0fc8a-5942-417e-9fbb-4f94536db193" (UID: "20e0fc8a-5942-417e-9fbb-4f94536db193"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.282769 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e60ca77-b621-4dfc-8b92-89d8cad06bf0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4e60ca77-b621-4dfc-8b92-89d8cad06bf0" (UID: "4e60ca77-b621-4dfc-8b92-89d8cad06bf0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.286524 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e60ca77-b621-4dfc-8b92-89d8cad06bf0-kube-api-access-jpqw5" (OuterVolumeSpecName: "kube-api-access-jpqw5") pod "4e60ca77-b621-4dfc-8b92-89d8cad06bf0" (UID: "4e60ca77-b621-4dfc-8b92-89d8cad06bf0"). InnerVolumeSpecName "kube-api-access-jpqw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.291599 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" event={"ID":"b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0","Type":"ContainerDied","Data":"d2bcc5bdfd6b01d7eae8c031aa45506d66a71e0990ef1e90815d622f0b826c17"} Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.291682 4739 scope.go:117] "RemoveContainer" containerID="bd2acd3a75008df77a9a70e8c10e031a2f47232a877e8beae462dd4837d94738" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.291891 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lgwdh" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.302166 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20e0fc8a-5942-417e-9fbb-4f94536db193-kube-api-access-92lcs" (OuterVolumeSpecName: "kube-api-access-92lcs") pod "20e0fc8a-5942-417e-9fbb-4f94536db193" (UID: "20e0fc8a-5942-417e-9fbb-4f94536db193"). InnerVolumeSpecName "kube-api-access-92lcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.308857 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4km74" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.308870 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4km74" event={"ID":"da457314-f1eb-477e-93c7-cf0d01e0f1e1","Type":"ContainerDied","Data":"5274d2c880f0b37137d00033ef51b4576afb98ee116a2c189de7881559882ace"} Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.308951 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5274d2c880f0b37137d00033ef51b4576afb98ee116a2c189de7881559882ace" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.311415 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-1ad6-account-create-update-pz97t" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.311416 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1ad6-account-create-update-pz97t" event={"ID":"39bd8e39-8e54-46e1-8217-dbdd74be8a8c","Type":"ContainerDied","Data":"fa910e243a5121f5d39cb671e037cfa3b198d87a07aad38c1c529812ffbef96b"} Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.311885 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa910e243a5121f5d39cb671e037cfa3b198d87a07aad38c1c529812ffbef96b" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.313509 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rlcgk" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.313522 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rlcgk" event={"ID":"4e60ca77-b621-4dfc-8b92-89d8cad06bf0","Type":"ContainerDied","Data":"dcfe4981324d6ed3c0d0658d5618590362273e2ee59332611ea9c220eff9097a"} Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.313690 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcfe4981324d6ed3c0d0658d5618590362273e2ee59332611ea9c220eff9097a" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.317486 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-tzg9c" event={"ID":"26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a","Type":"ContainerDied","Data":"2fcc9ff0ee9ec1bf3215bb73da9c8794568d2a01e795bd13f6b0ee5f76cb462a"} Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.317556 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fcc9ff0ee9ec1bf3215bb73da9c8794568d2a01e795bd13f6b0ee5f76cb462a" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.317726 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-tzg9c" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.325229 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-c4dd-account-create-update-xvgtp" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.325508 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c4dd-account-create-update-xvgtp" event={"ID":"20e0fc8a-5942-417e-9fbb-4f94536db193","Type":"ContainerDied","Data":"b59a0b6d590e4cc3c7b35ff633fe05c48128b7f0135fc72689f314a250c98f12"} Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.325553 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b59a0b6d590e4cc3c7b35ff633fe05c48128b7f0135fc72689f314a250c98f12" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.346973 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lgwdh"] Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.355374 4739 scope.go:117] "RemoveContainer" containerID="444fdbf2047039f125d6d76b03e432e4f2458521013159c69b011aaf37854298" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.356539 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lgwdh"] Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.391038 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e60ca77-b621-4dfc-8b92-89d8cad06bf0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.391106 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpqw5\" (UniqueName: \"kubernetes.io/projected/4e60ca77-b621-4dfc-8b92-89d8cad06bf0-kube-api-access-jpqw5\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.391123 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20e0fc8a-5942-417e-9fbb-4f94536db193-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.391137 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92lcs\" (UniqueName: \"kubernetes.io/projected/20e0fc8a-5942-417e-9fbb-4f94536db193-kube-api-access-92lcs\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.911916 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d1d2-account-create-update-spvtj" Feb 18 14:20:21 crc kubenswrapper[4739]: I0218 14:20:21.922652 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64f1-account-create-update-9xxvd" Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.012262 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c90e24b-98c5-4e26-8819-a5ae1aef1102-operator-scripts\") pod \"2c90e24b-98c5-4e26-8819-a5ae1aef1102\" (UID: \"2c90e24b-98c5-4e26-8819-a5ae1aef1102\") " Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.012541 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zr6d\" (UniqueName: \"kubernetes.io/projected/2c90e24b-98c5-4e26-8819-a5ae1aef1102-kube-api-access-8zr6d\") pod \"2c90e24b-98c5-4e26-8819-a5ae1aef1102\" (UID: \"2c90e24b-98c5-4e26-8819-a5ae1aef1102\") " Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.014867 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c90e24b-98c5-4e26-8819-a5ae1aef1102-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2c90e24b-98c5-4e26-8819-a5ae1aef1102" (UID: "2c90e24b-98c5-4e26-8819-a5ae1aef1102"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.032724 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c90e24b-98c5-4e26-8819-a5ae1aef1102-kube-api-access-8zr6d" (OuterVolumeSpecName: "kube-api-access-8zr6d") pod "2c90e24b-98c5-4e26-8819-a5ae1aef1102" (UID: "2c90e24b-98c5-4e26-8819-a5ae1aef1102"). InnerVolumeSpecName "kube-api-access-8zr6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.118952 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vqvg\" (UniqueName: \"kubernetes.io/projected/4d208990-8bd6-4b82-bba8-200f5c7985d0-kube-api-access-6vqvg\") pod \"4d208990-8bd6-4b82-bba8-200f5c7985d0\" (UID: \"4d208990-8bd6-4b82-bba8-200f5c7985d0\") " Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.118986 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d208990-8bd6-4b82-bba8-200f5c7985d0-operator-scripts\") pod \"4d208990-8bd6-4b82-bba8-200f5c7985d0\" (UID: \"4d208990-8bd6-4b82-bba8-200f5c7985d0\") " Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.119856 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zr6d\" (UniqueName: \"kubernetes.io/projected/2c90e24b-98c5-4e26-8819-a5ae1aef1102-kube-api-access-8zr6d\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.119882 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c90e24b-98c5-4e26-8819-a5ae1aef1102-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.122350 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d208990-8bd6-4b82-bba8-200f5c7985d0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d208990-8bd6-4b82-bba8-200f5c7985d0" (UID: "4d208990-8bd6-4b82-bba8-200f5c7985d0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.130709 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d208990-8bd6-4b82-bba8-200f5c7985d0-kube-api-access-6vqvg" (OuterVolumeSpecName: "kube-api-access-6vqvg") pod "4d208990-8bd6-4b82-bba8-200f5c7985d0" (UID: "4d208990-8bd6-4b82-bba8-200f5c7985d0"). InnerVolumeSpecName "kube-api-access-6vqvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.221736 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vqvg\" (UniqueName: \"kubernetes.io/projected/4d208990-8bd6-4b82-bba8-200f5c7985d0-kube-api-access-6vqvg\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.222958 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d208990-8bd6-4b82-bba8-200f5c7985d0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.347265 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d1d2-account-create-update-spvtj" event={"ID":"2c90e24b-98c5-4e26-8819-a5ae1aef1102","Type":"ContainerDied","Data":"edf655321c9334ae71b9620b789ac350b12afd8f8dad87241641d9fc65e18d81"} Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.347310 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edf655321c9334ae71b9620b789ac350b12afd8f8dad87241641d9fc65e18d81" Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.347377 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d1d2-account-create-update-spvtj" Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.364818 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64f1-account-create-update-9xxvd" event={"ID":"4d208990-8bd6-4b82-bba8-200f5c7985d0","Type":"ContainerDied","Data":"1a2360048a15096079dd7c59ee1514d1f0b25699b543e5c5cc39d05d95a5037b"} Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.364882 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a2360048a15096079dd7c59ee1514d1f0b25699b543e5c5cc39d05d95a5037b" Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.364964 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64f1-account-create-update-9xxvd" Feb 18 14:20:22 crc kubenswrapper[4739]: I0218 14:20:22.433263 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" path="/var/lib/kubelet/pods/b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0/volumes" Feb 18 14:20:23 crc kubenswrapper[4739]: I0218 14:20:23.379896 4739 generic.go:334] "Generic (PLEG): container finished" podID="edf3454e-4ac2-42a7-98b1-0f43065764c2" containerID="2f8b36ebc50069dffafc10ad5580f0650c3a5e44aee32de71fb90f645671e661" exitCode=0 Feb 18 14:20:23 crc kubenswrapper[4739]: I0218 14:20:23.380008 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gnm8m" event={"ID":"edf3454e-4ac2-42a7-98b1-0f43065764c2","Type":"ContainerDied","Data":"2f8b36ebc50069dffafc10ad5580f0650c3a5e44aee32de71fb90f645671e661"} Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.580847 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-gnm8m" Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.706308 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-combined-ca-bundle\") pod \"edf3454e-4ac2-42a7-98b1-0f43065764c2\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.706480 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-db-sync-config-data\") pod \"edf3454e-4ac2-42a7-98b1-0f43065764c2\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.706643 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzphm\" (UniqueName: \"kubernetes.io/projected/edf3454e-4ac2-42a7-98b1-0f43065764c2-kube-api-access-bzphm\") pod \"edf3454e-4ac2-42a7-98b1-0f43065764c2\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.706730 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-config-data\") pod \"edf3454e-4ac2-42a7-98b1-0f43065764c2\" (UID: \"edf3454e-4ac2-42a7-98b1-0f43065764c2\") " Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.712099 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edf3454e-4ac2-42a7-98b1-0f43065764c2-kube-api-access-bzphm" (OuterVolumeSpecName: "kube-api-access-bzphm") pod "edf3454e-4ac2-42a7-98b1-0f43065764c2" (UID: "edf3454e-4ac2-42a7-98b1-0f43065764c2"). InnerVolumeSpecName "kube-api-access-bzphm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.723856 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "edf3454e-4ac2-42a7-98b1-0f43065764c2" (UID: "edf3454e-4ac2-42a7-98b1-0f43065764c2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.738987 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edf3454e-4ac2-42a7-98b1-0f43065764c2" (UID: "edf3454e-4ac2-42a7-98b1-0f43065764c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.772141 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-config-data" (OuterVolumeSpecName: "config-data") pod "edf3454e-4ac2-42a7-98b1-0f43065764c2" (UID: "edf3454e-4ac2-42a7-98b1-0f43065764c2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.809094 4739 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.809129 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzphm\" (UniqueName: \"kubernetes.io/projected/edf3454e-4ac2-42a7-98b1-0f43065764c2-kube-api-access-bzphm\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.809147 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:25 crc kubenswrapper[4739]: I0218 14:20:25.809185 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf3454e-4ac2-42a7-98b1-0f43065764c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.490194 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gsm82" event={"ID":"dbeb37ff-68ee-4cc5-add5-18fc25605b6f","Type":"ContainerStarted","Data":"008998419ac3a845430a1074a96b3f7b5b4ba5a04964c1bb0ae62e1f93981104"} Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.507517 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gnm8m" event={"ID":"edf3454e-4ac2-42a7-98b1-0f43065764c2","Type":"ContainerDied","Data":"2b55e9103d7f00a94e8592c5a8d14e8e0f69cd459f1c5013831102a48b6f0d28"} Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.507568 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b55e9103d7f00a94e8592c5a8d14e8e0f69cd459f1c5013831102a48b6f0d28" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.507654 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-gnm8m" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.543036 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-gsm82" podStartSLOduration=3.87687706 podStartE2EDuration="11.543010055s" podCreationTimestamp="2026-02-18 14:20:15 +0000 UTC" firstStartedPulling="2026-02-18 14:20:17.90858474 +0000 UTC m=+1250.404305662" lastFinishedPulling="2026-02-18 14:20:25.574717735 +0000 UTC m=+1258.070438657" observedRunningTime="2026-02-18 14:20:26.528494023 +0000 UTC m=+1259.024214945" watchObservedRunningTime="2026-02-18 14:20:26.543010055 +0000 UTC m=+1259.038730987" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.995206 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-lc9pz"] Feb 18 14:20:26 crc kubenswrapper[4739]: E0218 14:20:26.995898 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c90e24b-98c5-4e26-8819-a5ae1aef1102" containerName="mariadb-account-create-update" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.995912 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c90e24b-98c5-4e26-8819-a5ae1aef1102" containerName="mariadb-account-create-update" Feb 18 14:20:26 crc kubenswrapper[4739]: E0218 14:20:26.995925 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a" containerName="mariadb-database-create" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.995933 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a" containerName="mariadb-database-create" Feb 18 14:20:26 crc kubenswrapper[4739]: E0218 14:20:26.995943 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39bd8e39-8e54-46e1-8217-dbdd74be8a8c" containerName="mariadb-account-create-update" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.995950 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="39bd8e39-8e54-46e1-8217-dbdd74be8a8c" containerName="mariadb-account-create-update" Feb 18 14:20:26 crc kubenswrapper[4739]: E0218 14:20:26.995968 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da457314-f1eb-477e-93c7-cf0d01e0f1e1" containerName="mariadb-database-create" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.995974 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="da457314-f1eb-477e-93c7-cf0d01e0f1e1" containerName="mariadb-database-create" Feb 18 14:20:26 crc kubenswrapper[4739]: E0218 14:20:26.995983 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f06df363-1196-4ba5-a5ba-d6e6c419a9d2" containerName="mariadb-database-create" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.995989 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f06df363-1196-4ba5-a5ba-d6e6c419a9d2" containerName="mariadb-database-create" Feb 18 14:20:26 crc kubenswrapper[4739]: E0218 14:20:26.996009 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" containerName="init" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996016 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" containerName="init" Feb 18 14:20:26 crc kubenswrapper[4739]: E0218 14:20:26.996028 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edf3454e-4ac2-42a7-98b1-0f43065764c2" containerName="glance-db-sync" Feb 18 14:20:26 crc kubenswrapper[4739]: 
I0218 14:20:26.996034 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="edf3454e-4ac2-42a7-98b1-0f43065764c2" containerName="glance-db-sync" Feb 18 14:20:26 crc kubenswrapper[4739]: E0218 14:20:26.996046 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e0fc8a-5942-417e-9fbb-4f94536db193" containerName="mariadb-account-create-update" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996052 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e0fc8a-5942-417e-9fbb-4f94536db193" containerName="mariadb-account-create-update" Feb 18 14:20:26 crc kubenswrapper[4739]: E0218 14:20:26.996062 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d208990-8bd6-4b82-bba8-200f5c7985d0" containerName="mariadb-account-create-update" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996069 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d208990-8bd6-4b82-bba8-200f5c7985d0" containerName="mariadb-account-create-update" Feb 18 14:20:26 crc kubenswrapper[4739]: E0218 14:20:26.996077 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e60ca77-b621-4dfc-8b92-89d8cad06bf0" containerName="mariadb-database-create" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996083 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e60ca77-b621-4dfc-8b92-89d8cad06bf0" containerName="mariadb-database-create" Feb 18 14:20:26 crc kubenswrapper[4739]: E0218 14:20:26.996094 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" containerName="dnsmasq-dns" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996099 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" containerName="dnsmasq-dns" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996268 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="edf3454e-4ac2-42a7-98b1-0f43065764c2" containerName="glance-db-sync" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996280 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e60ca77-b621-4dfc-8b92-89d8cad06bf0" containerName="mariadb-database-create" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996293 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a" containerName="mariadb-database-create" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996304 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f06df363-1196-4ba5-a5ba-d6e6c419a9d2" containerName="mariadb-database-create" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996313 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e0fc8a-5942-417e-9fbb-4f94536db193" containerName="mariadb-account-create-update" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996326 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ac31ff-21d1-41d9-9b77-15e64a2cd5f0" containerName="dnsmasq-dns" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996337 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="da457314-f1eb-477e-93c7-cf0d01e0f1e1" containerName="mariadb-database-create" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996349 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c90e24b-98c5-4e26-8819-a5ae1aef1102" containerName="mariadb-account-create-update" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996360 4739 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4d208990-8bd6-4b82-bba8-200f5c7985d0" containerName="mariadb-account-create-update" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.996375 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="39bd8e39-8e54-46e1-8217-dbdd74be8a8c" containerName="mariadb-account-create-update" Feb 18 14:20:26 crc kubenswrapper[4739]: I0218 14:20:26.997405 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.017634 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-lc9pz"] Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.142023 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.142287 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-config\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.142342 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.142363 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.142389 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.142556 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zgw6\" (UniqueName: \"kubernetes.io/projected/a95a3e0d-f263-464b-9406-0fc51724a068-kube-api-access-9zgw6\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.244776 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-config\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc 
kubenswrapper[4739]: I0218 14:20:27.244846 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.244865 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.244890 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.244913 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zgw6\" (UniqueName: \"kubernetes.io/projected/a95a3e0d-f263-464b-9406-0fc51724a068-kube-api-access-9zgw6\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.244941 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.245666 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-config\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.245787 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.245825 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.246062 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.246082 4739 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.266253 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zgw6\" (UniqueName: \"kubernetes.io/projected/a95a3e0d-f263-464b-9406-0fc51724a068-kube-api-access-9zgw6\") pod \"dnsmasq-dns-5f59b8f679-lc9pz\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.316820 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:27 crc kubenswrapper[4739]: W0218 14:20:27.840470 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda95a3e0d_f263_464b_9406_0fc51724a068.slice/crio-e8e67403108bde3a436c81c4b7ef9a41f1b4af29116b93e8959bf7b75aa603d8 WatchSource:0}: Error finding container e8e67403108bde3a436c81c4b7ef9a41f1b4af29116b93e8959bf7b75aa603d8: Status 404 returned error can't find the container with id e8e67403108bde3a436c81c4b7ef9a41f1b4af29116b93e8959bf7b75aa603d8 Feb 18 14:20:27 crc kubenswrapper[4739]: I0218 14:20:27.842087 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-lc9pz"] Feb 18 14:20:28 crc kubenswrapper[4739]: I0218 14:20:28.566151 4739 generic.go:334] "Generic (PLEG): container finished" podID="a95a3e0d-f263-464b-9406-0fc51724a068" containerID="521ee440b42cc6ac855fe6f696353905b77bad514b6fa532070f2cedd7a11e27" exitCode=0 Feb 18 14:20:28 crc kubenswrapper[4739]: I0218 14:20:28.566301 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" event={"ID":"a95a3e0d-f263-464b-9406-0fc51724a068","Type":"ContainerDied","Data":"521ee440b42cc6ac855fe6f696353905b77bad514b6fa532070f2cedd7a11e27"} Feb 18 14:20:28 crc kubenswrapper[4739]: I0218 14:20:28.566840 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" event={"ID":"a95a3e0d-f263-464b-9406-0fc51724a068","Type":"ContainerStarted","Data":"e8e67403108bde3a436c81c4b7ef9a41f1b4af29116b93e8959bf7b75aa603d8"} Feb 18 14:20:29 crc kubenswrapper[4739]: I0218 14:20:29.373506 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:20:29 crc kubenswrapper[4739]: I0218 14:20:29.374089 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:20:29 crc kubenswrapper[4739]: I0218 14:20:29.590619 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" event={"ID":"a95a3e0d-f263-464b-9406-0fc51724a068","Type":"ContainerStarted","Data":"2ba789c14a907f042da88ae951cbe7458905348d9982d8330fe417e5b45cd9fc"} Feb 18 14:20:29 crc kubenswrapper[4739]: I0218 
14:20:29.590774 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:29 crc kubenswrapper[4739]: I0218 14:20:29.617348 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" podStartSLOduration=3.617326893 podStartE2EDuration="3.617326893s" podCreationTimestamp="2026-02-18 14:20:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:29.609278706 +0000 UTC m=+1262.104999638" watchObservedRunningTime="2026-02-18 14:20:29.617326893 +0000 UTC m=+1262.113047815" Feb 18 14:20:36 crc kubenswrapper[4739]: I0218 14:20:36.672440 4739 generic.go:334] "Generic (PLEG): container finished" podID="dbeb37ff-68ee-4cc5-add5-18fc25605b6f" containerID="008998419ac3a845430a1074a96b3f7b5b4ba5a04964c1bb0ae62e1f93981104" exitCode=0 Feb 18 14:20:36 crc kubenswrapper[4739]: I0218 14:20:36.672540 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gsm82" event={"ID":"dbeb37ff-68ee-4cc5-add5-18fc25605b6f","Type":"ContainerDied","Data":"008998419ac3a845430a1074a96b3f7b5b4ba5a04964c1bb0ae62e1f93981104"} Feb 18 14:20:37 crc kubenswrapper[4739]: I0218 14:20:37.319347 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:20:37 crc kubenswrapper[4739]: I0218 14:20:37.389484 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-jf2xn"] Feb 18 14:20:37 crc kubenswrapper[4739]: I0218 14:20:37.389768 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" podUID="449c4682-2359-4fcc-8578-fd524beaf6d6" containerName="dnsmasq-dns" containerID="cri-o://56f03329df21428f26d15e7ee78eafa34d6e85bde858c22c00ae4b6f3ec7369c" gracePeriod=10 Feb 18 14:20:37 crc kubenswrapper[4739]: I0218 14:20:37.684980 4739 generic.go:334] "Generic (PLEG): container finished" podID="449c4682-2359-4fcc-8578-fd524beaf6d6" containerID="56f03329df21428f26d15e7ee78eafa34d6e85bde858c22c00ae4b6f3ec7369c" exitCode=0 Feb 18 14:20:37 crc kubenswrapper[4739]: I0218 14:20:37.685049 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" event={"ID":"449c4682-2359-4fcc-8578-fd524beaf6d6","Type":"ContainerDied","Data":"56f03329df21428f26d15e7ee78eafa34d6e85bde858c22c00ae4b6f3ec7369c"} Feb 18 14:20:37 crc kubenswrapper[4739]: I0218 14:20:37.968978 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.081038 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-dns-swift-storage-0\") pod \"449c4682-2359-4fcc-8578-fd524beaf6d6\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.081332 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-ovsdbserver-nb\") pod \"449c4682-2359-4fcc-8578-fd524beaf6d6\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.081436 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24b8d\" (UniqueName: \"kubernetes.io/projected/449c4682-2359-4fcc-8578-fd524beaf6d6-kube-api-access-24b8d\") pod \"449c4682-2359-4fcc-8578-fd524beaf6d6\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.081616 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-ovsdbserver-sb\") pod \"449c4682-2359-4fcc-8578-fd524beaf6d6\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.081878 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-config\") pod \"449c4682-2359-4fcc-8578-fd524beaf6d6\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.081986 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-dns-svc\") pod \"449c4682-2359-4fcc-8578-fd524beaf6d6\" (UID: \"449c4682-2359-4fcc-8578-fd524beaf6d6\") " Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.093277 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/449c4682-2359-4fcc-8578-fd524beaf6d6-kube-api-access-24b8d" (OuterVolumeSpecName: "kube-api-access-24b8d") pod "449c4682-2359-4fcc-8578-fd524beaf6d6" (UID: "449c4682-2359-4fcc-8578-fd524beaf6d6"). InnerVolumeSpecName "kube-api-access-24b8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.141780 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "449c4682-2359-4fcc-8578-fd524beaf6d6" (UID: "449c4682-2359-4fcc-8578-fd524beaf6d6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.149078 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "449c4682-2359-4fcc-8578-fd524beaf6d6" (UID: "449c4682-2359-4fcc-8578-fd524beaf6d6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.160099 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "449c4682-2359-4fcc-8578-fd524beaf6d6" (UID: "449c4682-2359-4fcc-8578-fd524beaf6d6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.170849 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-config" (OuterVolumeSpecName: "config") pod "449c4682-2359-4fcc-8578-fd524beaf6d6" (UID: "449c4682-2359-4fcc-8578-fd524beaf6d6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.185551 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.185587 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.185599 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.185614 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24b8d\" (UniqueName: \"kubernetes.io/projected/449c4682-2359-4fcc-8578-fd524beaf6d6-kube-api-access-24b8d\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.185626 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.188116 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "449c4682-2359-4fcc-8578-fd524beaf6d6" (UID: "449c4682-2359-4fcc-8578-fd524beaf6d6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.205269 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-gsm82" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.287270 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc987\" (UniqueName: \"kubernetes.io/projected/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-kube-api-access-lc987\") pod \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\" (UID: \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\") " Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.287492 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-combined-ca-bundle\") pod \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\" (UID: \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\") " Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.287532 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-config-data\") pod \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\" (UID: \"dbeb37ff-68ee-4cc5-add5-18fc25605b6f\") " Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.287970 4739 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/449c4682-2359-4fcc-8578-fd524beaf6d6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.291331 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-kube-api-access-lc987" (OuterVolumeSpecName: "kube-api-access-lc987") pod "dbeb37ff-68ee-4cc5-add5-18fc25605b6f" (UID: "dbeb37ff-68ee-4cc5-add5-18fc25605b6f"). InnerVolumeSpecName "kube-api-access-lc987". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.328031 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbeb37ff-68ee-4cc5-add5-18fc25605b6f" (UID: "dbeb37ff-68ee-4cc5-add5-18fc25605b6f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.349607 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-config-data" (OuterVolumeSpecName: "config-data") pod "dbeb37ff-68ee-4cc5-add5-18fc25605b6f" (UID: "dbeb37ff-68ee-4cc5-add5-18fc25605b6f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.390375 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.390416 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.390427 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lc987\" (UniqueName: \"kubernetes.io/projected/dbeb37ff-68ee-4cc5-add5-18fc25605b6f-kube-api-access-lc987\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.694793 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" event={"ID":"449c4682-2359-4fcc-8578-fd524beaf6d6","Type":"ContainerDied","Data":"4d2046f9d4641d243874fd60e2cf83edd0111ff1d89b77492ced2775ebec2c2c"} Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.694849 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-jf2xn" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.694860 4739 scope.go:117] "RemoveContainer" containerID="56f03329df21428f26d15e7ee78eafa34d6e85bde858c22c00ae4b6f3ec7369c" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.696873 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gsm82" event={"ID":"dbeb37ff-68ee-4cc5-add5-18fc25605b6f","Type":"ContainerDied","Data":"10e793196c49816c98522b2b831956b794e78cf95f2276d50891f9592e2570fa"} Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.696900 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10e793196c49816c98522b2b831956b794e78cf95f2276d50891f9592e2570fa" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.696920 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-gsm82" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.728375 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-jf2xn"] Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.729196 4739 scope.go:117] "RemoveContainer" containerID="0af0be098f1f2e90f6517909dc969ea837f11c0c5020ec683a860a135d91b0f1" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.749175 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-jf2xn"] Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.960291 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-sdzrr"] Feb 18 14:20:38 crc kubenswrapper[4739]: E0218 14:20:38.960781 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="449c4682-2359-4fcc-8578-fd524beaf6d6" containerName="init" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.960808 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="449c4682-2359-4fcc-8578-fd524beaf6d6" containerName="init" Feb 18 14:20:38 crc kubenswrapper[4739]: E0218 14:20:38.960833 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="449c4682-2359-4fcc-8578-fd524beaf6d6" containerName="dnsmasq-dns" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.960841 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="449c4682-2359-4fcc-8578-fd524beaf6d6" containerName="dnsmasq-dns" Feb 18 14:20:38 crc kubenswrapper[4739]: E0218 14:20:38.960860 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbeb37ff-68ee-4cc5-add5-18fc25605b6f" containerName="keystone-db-sync" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.960867 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbeb37ff-68ee-4cc5-add5-18fc25605b6f" containerName="keystone-db-sync" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.961082 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="449c4682-2359-4fcc-8578-fd524beaf6d6" containerName="dnsmasq-dns" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.961100 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbeb37ff-68ee-4cc5-add5-18fc25605b6f" containerName="keystone-db-sync" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.962167 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" Feb 18 14:20:38 crc kubenswrapper[4739]: I0218 14:20:38.992430 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-sdzrr"] Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.009416 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.009601 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-config\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.009753 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.009844 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.009890 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.010173 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4nck\" (UniqueName: \"kubernetes.io/projected/e6a350a1-b153-4edb-b937-ff7ccec8d1de-kube-api-access-w4nck\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.021988 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-pffpk"] Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.023902 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pffpk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.028372 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.028586 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.028773 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5fzf8" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.029136 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.029414 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.054664 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pffpk"] Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.189133 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-config-data\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.189935 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.190051 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-config\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.190197 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-credential-keys\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.190359 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.190469 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-fernet-keys\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.190634 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.190720 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.190901 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h99sv\" (UniqueName: \"kubernetes.io/projected/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-kube-api-access-h99sv\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.190996 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-scripts\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.191083 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4nck\" (UniqueName: \"kubernetes.io/projected/e6a350a1-b153-4edb-b937-ff7ccec8d1de-kube-api-access-w4nck\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.193537 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-combined-ca-bundle\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.194342 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.199565 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.199897 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-config\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.200145 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-ovsdbserver-nb\") pod 
\"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.204585 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.224264 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-2dhxm"] Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.226393 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-2dhxm" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.242456 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-2dhxm"] Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.255242 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-gcstc" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.255611 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.257303 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4nck\" (UniqueName: \"kubernetes.io/projected/e6a350a1-b153-4edb-b937-ff7ccec8d1de-kube-api-access-w4nck\") pod \"dnsmasq-dns-bbf5cc879-sdzrr\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.281580 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.301787 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3edd4390-e376-469a-b7c5-9bd7bf9dd100-combined-ca-bundle\") pod \"heat-db-sync-2dhxm\" (UID: \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\") " pod="openstack/heat-db-sync-2dhxm" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.301894 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h99sv\" (UniqueName: \"kubernetes.io/projected/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-kube-api-access-h99sv\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.301923 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-scripts\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.301963 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wgcv\" (UniqueName: \"kubernetes.io/projected/3edd4390-e376-469a-b7c5-9bd7bf9dd100-kube-api-access-6wgcv\") pod \"heat-db-sync-2dhxm\" (UID: \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\") " pod="openstack/heat-db-sync-2dhxm" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.301993 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-combined-ca-bundle\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.302048 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3edd4390-e376-469a-b7c5-9bd7bf9dd100-config-data\") pod \"heat-db-sync-2dhxm\" (UID: \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\") " pod="openstack/heat-db-sync-2dhxm" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.302149 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-config-data\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.302215 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-credential-keys\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.302267 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-fernet-keys\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk" Feb 18 14:20:39 crc kubenswrapper[4739]: 
I0218 14:20:39.327618 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-scripts\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.328113 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-fernet-keys\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.331285 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-combined-ca-bundle\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.342431 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-hm27f"] Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.357095 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-credential-keys\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.359531 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-hm27f" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.369871 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-config-data\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.371198 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9bgt9" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.371484 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.371600 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.377810 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h99sv\" (UniqueName: \"kubernetes.io/projected/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-kube-api-access-h99sv\") pod \"keystone-bootstrap-pffpk\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " pod="openstack/keystone-bootstrap-pffpk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.406901 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh97j\" (UniqueName: \"kubernetes.io/projected/51d77527-a940-4423-ac63-4a7cdf366510-kube-api-access-vh97j\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.407013 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3edd4390-e376-469a-b7c5-9bd7bf9dd100-combined-ca-bundle\") pod \"heat-db-sync-2dhxm\" (UID: \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\") " pod="openstack/heat-db-sync-2dhxm" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.407050 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-db-sync-config-data\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.407071 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-scripts\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.407105 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wgcv\" (UniqueName: \"kubernetes.io/projected/3edd4390-e376-469a-b7c5-9bd7bf9dd100-kube-api-access-6wgcv\") pod \"heat-db-sync-2dhxm\" (UID: \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\") " pod="openstack/heat-db-sync-2dhxm" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.407140 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-combined-ca-bundle\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.407167 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3edd4390-e376-469a-b7c5-9bd7bf9dd100-config-data\") pod \"heat-db-sync-2dhxm\" (UID: \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\") " pod="openstack/heat-db-sync-2dhxm" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.407192 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-config-data\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.407260 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51d77527-a940-4423-ac63-4a7cdf366510-etc-machine-id\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.414216 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-hm27f"] Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.414373 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3edd4390-e376-469a-b7c5-9bd7bf9dd100-combined-ca-bundle\") pod \"heat-db-sync-2dhxm\" (UID: \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\") " pod="openstack/heat-db-sync-2dhxm" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.415092 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3edd4390-e376-469a-b7c5-9bd7bf9dd100-config-data\") pod \"heat-db-sync-2dhxm\" (UID: \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\") " pod="openstack/heat-db-sync-2dhxm" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.512213 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wgcv\" (UniqueName: \"kubernetes.io/projected/3edd4390-e376-469a-b7c5-9bd7bf9dd100-kube-api-access-6wgcv\") pod \"heat-db-sync-2dhxm\" (UID: \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\") " pod="openstack/heat-db-sync-2dhxm" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.516111 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-config-data\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.516246 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51d77527-a940-4423-ac63-4a7cdf366510-etc-machine-id\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.516323 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh97j\" (UniqueName: \"kubernetes.io/projected/51d77527-a940-4423-ac63-4a7cdf366510-kube-api-access-vh97j\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.516502 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-db-sync-config-data\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.516534 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-scripts\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.516632 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-combined-ca-bundle\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.522280 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-config-data\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.522362 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51d77527-a940-4423-ac63-4a7cdf366510-etc-machine-id\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " 
pod="openstack/cinder-db-sync-hm27f" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.527013 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-db-sync-config-data\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.538708 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-scripts\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.560389 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-combined-ca-bundle\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.641227 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh97j\" (UniqueName: \"kubernetes.io/projected/51d77527-a940-4423-ac63-4a7cdf366510-kube-api-access-vh97j\") pod \"cinder-db-sync-hm27f\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " pod="openstack/cinder-db-sync-hm27f" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.668527 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-q58nf"] Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.670991 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.673986 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pffpk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.686799 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.687267 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-f4jrj" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.696555 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.696743 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-hc8hk"] Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.698177 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-hc8hk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.714825 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.715054 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-crc55" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.715196 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.727216 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3697715-3f94-4086-99ab-65a492bd7542-combined-ca-bundle\") pod \"neutron-db-sync-hc8hk\" (UID: \"b3697715-3f94-4086-99ab-65a492bd7542\") " pod="openstack/neutron-db-sync-hc8hk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.727554 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw7wr\" (UniqueName: \"kubernetes.io/projected/b3697715-3f94-4086-99ab-65a492bd7542-kube-api-access-vw7wr\") pod \"neutron-db-sync-hc8hk\" (UID: \"b3697715-3f94-4086-99ab-65a492bd7542\") " pod="openstack/neutron-db-sync-hc8hk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.728024 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b3697715-3f94-4086-99ab-65a492bd7542-config\") pod \"neutron-db-sync-hc8hk\" (UID: \"b3697715-3f94-4086-99ab-65a492bd7542\") " pod="openstack/neutron-db-sync-hc8hk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.749545 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-q58nf"] Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.752535 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-2dhxm" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.779586 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hc8hk"] Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.780167 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-hm27f" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.798769 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-sdzrr"] Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.830298 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-config-data\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.830351 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-combined-ca-bundle\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.830402 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3697715-3f94-4086-99ab-65a492bd7542-combined-ca-bundle\") pod \"neutron-db-sync-hc8hk\" (UID: \"b3697715-3f94-4086-99ab-65a492bd7542\") " pod="openstack/neutron-db-sync-hc8hk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.830498 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-scripts\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.830522 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw7wr\" (UniqueName: \"kubernetes.io/projected/b3697715-3f94-4086-99ab-65a492bd7542-kube-api-access-vw7wr\") pod \"neutron-db-sync-hc8hk\" (UID: \"b3697715-3f94-4086-99ab-65a492bd7542\") " pod="openstack/neutron-db-sync-hc8hk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.830582 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d67kg\" (UniqueName: \"kubernetes.io/projected/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-kube-api-access-d67kg\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.830612 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-logs\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.830640 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b3697715-3f94-4086-99ab-65a492bd7542-config\") pod \"neutron-db-sync-hc8hk\" (UID: \"b3697715-3f94-4086-99ab-65a492bd7542\") " pod="openstack/neutron-db-sync-hc8hk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.855345 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b3697715-3f94-4086-99ab-65a492bd7542-combined-ca-bundle\") pod \"neutron-db-sync-hc8hk\" (UID: \"b3697715-3f94-4086-99ab-65a492bd7542\") " pod="openstack/neutron-db-sync-hc8hk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.877181 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b3697715-3f94-4086-99ab-65a492bd7542-config\") pod \"neutron-db-sync-hc8hk\" (UID: \"b3697715-3f94-4086-99ab-65a492bd7542\") " pod="openstack/neutron-db-sync-hc8hk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.888243 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw7wr\" (UniqueName: \"kubernetes.io/projected/b3697715-3f94-4086-99ab-65a492bd7542-kube-api-access-vw7wr\") pod \"neutron-db-sync-hc8hk\" (UID: \"b3697715-3f94-4086-99ab-65a492bd7542\") " pod="openstack/neutron-db-sync-hc8hk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.906163 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hc8hk" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.914530 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-7mcdv"] Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.916837 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.948518 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-7mcdv"] Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.952605 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-scripts\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.952792 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d67kg\" (UniqueName: \"kubernetes.io/projected/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-kube-api-access-d67kg\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.952852 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-logs\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.952980 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-config-data\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.953003 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-combined-ca-bundle\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.957399 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-logs\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.970585 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-scripts\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.971357 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-combined-ca-bundle\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:39 crc kubenswrapper[4739]: I0218 14:20:39.990239 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-config-data\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.014811 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-h5s86"] Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.016594 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h5s86" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.016976 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d67kg\" (UniqueName: \"kubernetes.io/projected/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-kube-api-access-d67kg\") pod \"placement-db-sync-q58nf\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.038089 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.038139 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xnq4d" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.038368 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h5s86"] Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.067161 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-q58nf" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.071989 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.072083 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9dzc\" (UniqueName: \"kubernetes.io/projected/f4b54fe6-91fa-4ba1-9a4e-135277494a27-kube-api-access-w9dzc\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.072199 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.072225 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.072255 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.072290 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-config\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.184454 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.184565 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9dzc\" (UniqueName: \"kubernetes.io/projected/f4b54fe6-91fa-4ba1-9a4e-135277494a27-kube-api-access-w9dzc\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.184689 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7wlp\" (UniqueName: 
\"kubernetes.io/projected/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-kube-api-access-s7wlp\") pod \"barbican-db-sync-h5s86\" (UID: \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\") " pod="openstack/barbican-db-sync-h5s86" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.184744 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-db-sync-config-data\") pod \"barbican-db-sync-h5s86\" (UID: \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\") " pod="openstack/barbican-db-sync-h5s86" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.184801 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-combined-ca-bundle\") pod \"barbican-db-sync-h5s86\" (UID: \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\") " pod="openstack/barbican-db-sync-h5s86" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.184901 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.184941 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.184996 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.185039 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-config\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.185922 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-config\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.186474 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.188273 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-ovsdbserver-nb\") pod 
\"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.188489 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.188912 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.215927 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.218622 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.237967 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.243188 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.265364 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9dzc\" (UniqueName: \"kubernetes.io/projected/f4b54fe6-91fa-4ba1-9a4e-135277494a27-kube-api-access-w9dzc\") pod \"dnsmasq-dns-56df8fb6b7-7mcdv\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.296159 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7wlp\" (UniqueName: \"kubernetes.io/projected/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-kube-api-access-s7wlp\") pod \"barbican-db-sync-h5s86\" (UID: \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\") " pod="openstack/barbican-db-sync-h5s86" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.296271 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-db-sync-config-data\") pod \"barbican-db-sync-h5s86\" (UID: \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\") " pod="openstack/barbican-db-sync-h5s86" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.296352 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-combined-ca-bundle\") pod \"barbican-db-sync-h5s86\" (UID: \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\") " pod="openstack/barbican-db-sync-h5s86" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.304202 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-db-sync-config-data\") pod \"barbican-db-sync-h5s86\" (UID: \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\") " pod="openstack/barbican-db-sync-h5s86" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.304578 4739 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/ceilometer-0"] Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.305719 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-combined-ca-bundle\") pod \"barbican-db-sync-h5s86\" (UID: \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\") " pod="openstack/barbican-db-sync-h5s86" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.306633 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.351684 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.354305 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.355199 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7wlp\" (UniqueName: \"kubernetes.io/projected/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-kube-api-access-s7wlp\") pod \"barbican-db-sync-h5s86\" (UID: \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\") " pod="openstack/barbican-db-sync-h5s86" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.363469 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.364119 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.364271 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gvb8h" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.364376 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.370479 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-h5s86" Feb 18 14:20:40 crc kubenswrapper[4739]: W0218 14:20:40.376934 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6a350a1_b153_4edb_b937_ff7ccec8d1de.slice/crio-9e14d3ee166aded1d7a8910ebecdb1eccbc4c5aab0200432ebde4cfc1c5a5473 WatchSource:0}: Error finding container 9e14d3ee166aded1d7a8910ebecdb1eccbc4c5aab0200432ebde4cfc1c5a5473: Status 404 returned error can't find the container with id 9e14d3ee166aded1d7a8910ebecdb1eccbc4c5aab0200432ebde4cfc1c5a5473 Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.378116 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.401114 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-scripts\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.401269 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngbgv\" (UniqueName: \"kubernetes.io/projected/e2a576aa-9125-4096-8ee5-ac83d6aaee01-kube-api-access-ngbgv\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.401481 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a576aa-9125-4096-8ee5-ac83d6aaee01-log-httpd\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.401519 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-config-data\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.401563 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.401655 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.401812 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a576aa-9125-4096-8ee5-ac83d6aaee01-run-httpd\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.460129 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="449c4682-2359-4fcc-8578-fd524beaf6d6" 
path="/var/lib/kubelet/pods/449c4682-2359-4fcc-8578-fd524beaf6d6/volumes" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.461514 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.465363 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-sdzrr"] Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.465493 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.469084 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.469130 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.506368 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-scripts\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.506424 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngbgv\" (UniqueName: \"kubernetes.io/projected/e2a576aa-9125-4096-8ee5-ac83d6aaee01-kube-api-access-ngbgv\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.506467 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.510605 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-scripts\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.510677 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a576aa-9125-4096-8ee5-ac83d6aaee01-log-httpd\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.510729 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-config-data\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.510756 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: 
I0218 14:20:40.510839 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.510960 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkcvc\" (UniqueName: \"kubernetes.io/projected/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-kube-api-access-bkcvc\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.510999 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.511039 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.511081 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a576aa-9125-4096-8ee5-ac83d6aaee01-run-httpd\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.511119 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-logs\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.511146 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.511187 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-config-data\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.511789 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a576aa-9125-4096-8ee5-ac83d6aaee01-log-httpd\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.516198 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a576aa-9125-4096-8ee5-ac83d6aaee01-run-httpd\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.519091 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-scripts\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.540485 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngbgv\" (UniqueName: \"kubernetes.io/projected/e2a576aa-9125-4096-8ee5-ac83d6aaee01-kube-api-access-ngbgv\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.546683 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.554660 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.589229 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-config-data\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.601627 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.613718 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.613842 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.613885 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkcvc\" (UniqueName: \"kubernetes.io/projected/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-kube-api-access-bkcvc\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.613920 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dee39188-8dd1-45dd-afd8-ef4599d03adb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.613951 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.614023 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.614059 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.614109 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8swm\" (UniqueName: \"kubernetes.io/projected/dee39188-8dd1-45dd-afd8-ef4599d03adb-kube-api-access-l8swm\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.614135 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-logs\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.614164 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.614205 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-config-data\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.614252 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.614342 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.614396 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dee39188-8dd1-45dd-afd8-ef4599d03adb-logs\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.614471 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-scripts\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.614547 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.615998 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.616361 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-logs\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.620552 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-scripts\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.622599 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.626440 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.626499 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f742b1b3d6273dd3375e0e5a76a4c01f047ef0c4f7f8765a09ef674c2c3b6349/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.634759 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-config-data\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.637233 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.645869 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkcvc\" (UniqueName: \"kubernetes.io/projected/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-kube-api-access-bkcvc\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.669208 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pffpk"] Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.698065 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"glance-default-external-api-0\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " pod="openstack/glance-default-external-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.722861 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dee39188-8dd1-45dd-afd8-ef4599d03adb-logs\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.722976 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.723050 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 
18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.723113 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.723141 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dee39188-8dd1-45dd-afd8-ef4599d03adb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.723163 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.723203 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8swm\" (UniqueName: \"kubernetes.io/projected/dee39188-8dd1-45dd-afd8-ef4599d03adb-kube-api-access-l8swm\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.723249 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.724162 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dee39188-8dd1-45dd-afd8-ef4599d03adb-logs\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.725509 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dee39188-8dd1-45dd-afd8-ef4599d03adb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.732466 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.733412 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.733466 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0bd6abac90ebac69ac03837941e4aa1820f14a49ea1b1fe31e1dd216b0487447/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.741371 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.742469 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.753940 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.787649 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8swm\" (UniqueName: \"kubernetes.io/projected/dee39188-8dd1-45dd-afd8-ef4599d03adb-kube-api-access-l8swm\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.824825 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"glance-default-internal-api-0\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.840660 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pffpk" event={"ID":"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7","Type":"ContainerStarted","Data":"70d11242c01619e7bdfd32d0a6252d06f3b61a6d441fcbc7ab28b9bd66c4286b"} Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.844671 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" event={"ID":"e6a350a1-b153-4edb-b937-ff7ccec8d1de","Type":"ContainerStarted","Data":"9e14d3ee166aded1d7a8910ebecdb1eccbc4c5aab0200432ebde4cfc1c5a5473"} Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.873894 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.901736 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:20:40 crc kubenswrapper[4739]: I0218 14:20:40.985101 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 14:20:41 crc kubenswrapper[4739]: I0218 14:20:41.337717 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-hm27f"] Feb 18 14:20:41 crc kubenswrapper[4739]: I0218 14:20:41.412142 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-2dhxm"] Feb 18 14:20:41 crc kubenswrapper[4739]: W0218 14:20:41.572840 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3697715_3f94_4086_99ab_65a492bd7542.slice/crio-7acef4fd8413ff750142ee237ef31a3901dacad49674c51eb84a96f1a5fb1404 WatchSource:0}: Error finding container 7acef4fd8413ff750142ee237ef31a3901dacad49674c51eb84a96f1a5fb1404: Status 404 returned error can't find the container with id 7acef4fd8413ff750142ee237ef31a3901dacad49674c51eb84a96f1a5fb1404 Feb 18 14:20:41 crc kubenswrapper[4739]: I0218 14:20:41.589682 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-q58nf"] Feb 18 14:20:41 crc kubenswrapper[4739]: I0218 14:20:41.604744 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hc8hk"] Feb 18 14:20:41 crc kubenswrapper[4739]: I0218 14:20:41.857229 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-2dhxm" event={"ID":"3edd4390-e376-469a-b7c5-9bd7bf9dd100","Type":"ContainerStarted","Data":"ab3a872330660cb89409af9b912cee12aa6ccbf272a46a86fd90d8fd6dc9f4c2"} Feb 18 14:20:41 crc kubenswrapper[4739]: I0218 14:20:41.860522 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hc8hk" event={"ID":"b3697715-3f94-4086-99ab-65a492bd7542","Type":"ContainerStarted","Data":"7acef4fd8413ff750142ee237ef31a3901dacad49674c51eb84a96f1a5fb1404"} Feb 18 14:20:41 crc kubenswrapper[4739]: I0218 14:20:41.867063 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pffpk" event={"ID":"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7","Type":"ContainerStarted","Data":"0a9c96ef9bc05a189057147729fcd0a7c0a62f199e816b285da0bdde192dbc40"} Feb 18 14:20:41 crc kubenswrapper[4739]: I0218 14:20:41.868649 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hm27f" event={"ID":"51d77527-a940-4423-ac63-4a7cdf366510","Type":"ContainerStarted","Data":"b800d2e5f20a2d68b8e0f58bfc2fa70fc222830a78f8d8d41068e13af2965ba2"} Feb 18 14:20:41 crc kubenswrapper[4739]: I0218 14:20:41.872320 4739 generic.go:334] "Generic (PLEG): container finished" podID="e6a350a1-b153-4edb-b937-ff7ccec8d1de" containerID="1779a8f6e311441460ae687923fa5a4909e3214be09805f17629a2dc2d3a75ca" exitCode=0 Feb 18 14:20:41 crc kubenswrapper[4739]: I0218 14:20:41.872525 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" event={"ID":"e6a350a1-b153-4edb-b937-ff7ccec8d1de","Type":"ContainerDied","Data":"1779a8f6e311441460ae687923fa5a4909e3214be09805f17629a2dc2d3a75ca"} Feb 18 14:20:41 crc kubenswrapper[4739]: I0218 14:20:41.878665 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q58nf" event={"ID":"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc","Type":"ContainerStarted","Data":"1870c4359d29029459a4d3730dceade0333f6df6959a787f14729f3d6e56a8fd"} Feb 18 14:20:41 crc 
kubenswrapper[4739]: I0218 14:20:41.939729 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-pffpk" podStartSLOduration=3.939711038 podStartE2EDuration="3.939711038s" podCreationTimestamp="2026-02-18 14:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:41.886810711 +0000 UTC m=+1274.382531653" watchObservedRunningTime="2026-02-18 14:20:41.939711038 +0000 UTC m=+1274.435431960" Feb 18 14:20:42 crc kubenswrapper[4739]: I0218 14:20:42.047402 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-7mcdv"] Feb 18 14:20:42 crc kubenswrapper[4739]: W0218 14:20:42.054138 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4b54fe6_91fa_4ba1_9a4e_135277494a27.slice/crio-6a36c3e7151b6223682be3dc0062f1484a767c13869813b992c048797216d7e7 WatchSource:0}: Error finding container 6a36c3e7151b6223682be3dc0062f1484a767c13869813b992c048797216d7e7: Status 404 returned error can't find the container with id 6a36c3e7151b6223682be3dc0062f1484a767c13869813b992c048797216d7e7 Feb 18 14:20:42 crc kubenswrapper[4739]: I0218 14:20:42.079582 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h5s86"] Feb 18 14:20:42 crc kubenswrapper[4739]: I0218 14:20:42.255012 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:20:42 crc kubenswrapper[4739]: W0218 14:20:42.263608 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddee39188_8dd1_45dd_afd8_ef4599d03adb.slice/crio-3de40032c9cfb4df3fb82bbfc31efd6983d0c4857cda5c9f3d8ac5118ab12bd7 WatchSource:0}: Error finding container 3de40032c9cfb4df3fb82bbfc31efd6983d0c4857cda5c9f3d8ac5118ab12bd7: Status 404 returned error can't find the container with id 3de40032c9cfb4df3fb82bbfc31efd6983d0c4857cda5c9f3d8ac5118ab12bd7 Feb 18 14:20:42 crc kubenswrapper[4739]: I0218 14:20:42.388313 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:20:42 crc kubenswrapper[4739]: I0218 14:20:42.730124 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:20:42 crc kubenswrapper[4739]: I0218 14:20:42.942713 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.022277 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.127810 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h5s86" event={"ID":"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8","Type":"ContainerStarted","Data":"d2307342ad946d88b327f9c4998f5fef25fdf0715d6dc8137505b684ccb0bf1f"} Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.172389 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dee39188-8dd1-45dd-afd8-ef4599d03adb","Type":"ContainerStarted","Data":"3de40032c9cfb4df3fb82bbfc31efd6983d0c4857cda5c9f3d8ac5118ab12bd7"} Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.202599 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hc8hk" 
event={"ID":"b3697715-3f94-4086-99ab-65a492bd7542","Type":"ContainerStarted","Data":"615daa9d2c89107b5d8baf69578eb811649ddb2693aedf9b046cefb6786b3af5"} Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.206457 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.207281 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" event={"ID":"f4b54fe6-91fa-4ba1-9a4e-135277494a27","Type":"ContainerStarted","Data":"31b7ef4c1c644cdbe389fbfc6e7e9e8a47e57aa821f30f4da35de5aa73c5099f"} Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.207305 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" event={"ID":"f4b54fe6-91fa-4ba1-9a4e-135277494a27","Type":"ContainerStarted","Data":"6a36c3e7151b6223682be3dc0062f1484a767c13869813b992c048797216d7e7"} Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.225516 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-hc8hk" podStartSLOduration=4.225502462 podStartE2EDuration="4.225502462s" podCreationTimestamp="2026-02-18 14:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:43.223875571 +0000 UTC m=+1275.719596503" watchObservedRunningTime="2026-02-18 14:20:43.225502462 +0000 UTC m=+1275.721223384" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.258299 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2a576aa-9125-4096-8ee5-ac83d6aaee01","Type":"ContainerStarted","Data":"012dc8f477dfe3bd25f7fe5decf6c00cb3c850250a18972e074f41544b597e70"} Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.259097 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-dns-swift-storage-0\") pod \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.259134 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-ovsdbserver-nb\") pod \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.259231 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-dns-svc\") pod \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.259278 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-ovsdbserver-sb\") pod \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.259382 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4nck\" (UniqueName: \"kubernetes.io/projected/e6a350a1-b153-4edb-b937-ff7ccec8d1de-kube-api-access-w4nck\") pod 
\"e6a350a1-b153-4edb-b937-ff7ccec8d1de\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.263632 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-config\") pod \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\" (UID: \"e6a350a1-b153-4edb-b937-ff7ccec8d1de\") " Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.326480 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6a350a1-b153-4edb-b937-ff7ccec8d1de-kube-api-access-w4nck" (OuterVolumeSpecName: "kube-api-access-w4nck") pod "e6a350a1-b153-4edb-b937-ff7ccec8d1de" (UID: "e6a350a1-b153-4edb-b937-ff7ccec8d1de"). InnerVolumeSpecName "kube-api-access-w4nck". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.334130 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e6a350a1-b153-4edb-b937-ff7ccec8d1de" (UID: "e6a350a1-b153-4edb-b937-ff7ccec8d1de"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.350204 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.378328 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4nck\" (UniqueName: \"kubernetes.io/projected/e6a350a1-b153-4edb-b937-ff7ccec8d1de-kube-api-access-w4nck\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.378366 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.411817 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e6a350a1-b153-4edb-b937-ff7ccec8d1de" (UID: "e6a350a1-b153-4edb-b937-ff7ccec8d1de"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.426754 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-config" (OuterVolumeSpecName: "config") pod "e6a350a1-b153-4edb-b937-ff7ccec8d1de" (UID: "e6a350a1-b153-4edb-b937-ff7ccec8d1de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.431904 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e6a350a1-b153-4edb-b937-ff7ccec8d1de" (UID: "e6a350a1-b153-4edb-b937-ff7ccec8d1de"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.447261 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e6a350a1-b153-4edb-b937-ff7ccec8d1de" (UID: "e6a350a1-b153-4edb-b937-ff7ccec8d1de"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.480185 4739 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.480490 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.480560 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:43 crc kubenswrapper[4739]: I0218 14:20:43.480620 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a350a1-b153-4edb-b937-ff7ccec8d1de-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:44 crc kubenswrapper[4739]: I0218 14:20:44.272900 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b54fe6-91fa-4ba1-9a4e-135277494a27" containerID="31b7ef4c1c644cdbe389fbfc6e7e9e8a47e57aa821f30f4da35de5aa73c5099f" exitCode=0 Feb 18 14:20:44 crc kubenswrapper[4739]: I0218 14:20:44.273037 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" event={"ID":"f4b54fe6-91fa-4ba1-9a4e-135277494a27","Type":"ContainerDied","Data":"31b7ef4c1c644cdbe389fbfc6e7e9e8a47e57aa821f30f4da35de5aa73c5099f"} Feb 18 14:20:44 crc kubenswrapper[4739]: I0218 14:20:44.276387 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9d24b5d-3b30-41c2-b736-7a98e88e1da4","Type":"ContainerStarted","Data":"d7195d297c9d5141a71387652075a97edc794fb733f7afeadd4dd323957a1f63"} Feb 18 14:20:44 crc kubenswrapper[4739]: I0218 14:20:44.280022 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dee39188-8dd1-45dd-afd8-ef4599d03adb","Type":"ContainerStarted","Data":"a44a8ff33136a79d160b7594ff4f4cc994f66dd03004902c8c1353bd8c3ef53c"} Feb 18 14:20:44 crc kubenswrapper[4739]: I0218 14:20:44.286674 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" Feb 18 14:20:44 crc kubenswrapper[4739]: I0218 14:20:44.288517 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-sdzrr" event={"ID":"e6a350a1-b153-4edb-b937-ff7ccec8d1de","Type":"ContainerDied","Data":"9e14d3ee166aded1d7a8910ebecdb1eccbc4c5aab0200432ebde4cfc1c5a5473"} Feb 18 14:20:44 crc kubenswrapper[4739]: I0218 14:20:44.288567 4739 scope.go:117] "RemoveContainer" containerID="1779a8f6e311441460ae687923fa5a4909e3214be09805f17629a2dc2d3a75ca" Feb 18 14:20:44 crc kubenswrapper[4739]: I0218 14:20:44.532012 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-sdzrr"] Feb 18 14:20:44 crc kubenswrapper[4739]: I0218 14:20:44.554732 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-sdzrr"] Feb 18 14:20:45 crc kubenswrapper[4739]: I0218 14:20:45.368864 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9d24b5d-3b30-41c2-b736-7a98e88e1da4","Type":"ContainerStarted","Data":"c5957e0cde43838579939aa30bcc7ed4defe06badb42b7084617cf8db85e67b4"} Feb 18 14:20:45 crc kubenswrapper[4739]: I0218 14:20:45.414744 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" event={"ID":"f4b54fe6-91fa-4ba1-9a4e-135277494a27","Type":"ContainerStarted","Data":"0fa401e0fef3f9cb42562b511b0eebc5a44973f242c043cd8c922196427d9cb3"} Feb 18 14:20:45 crc kubenswrapper[4739]: I0218 14:20:45.415536 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:45 crc kubenswrapper[4739]: I0218 14:20:45.444604 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" podStartSLOduration=6.444584571 podStartE2EDuration="6.444584571s" podCreationTimestamp="2026-02-18 14:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:45.442895127 +0000 UTC m=+1277.938616049" watchObservedRunningTime="2026-02-18 14:20:45.444584571 +0000 UTC m=+1277.940305513" Feb 18 14:20:46 crc kubenswrapper[4739]: I0218 14:20:46.426233 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6a350a1-b153-4edb-b937-ff7ccec8d1de" path="/var/lib/kubelet/pods/e6a350a1-b153-4edb-b937-ff7ccec8d1de/volumes" Feb 18 14:20:46 crc kubenswrapper[4739]: I0218 14:20:46.428937 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dee39188-8dd1-45dd-afd8-ef4599d03adb","Type":"ContainerStarted","Data":"7849d496b346d76e556cffbb4d826b3d41a907f7ef452783e6466378fd4c5234"} Feb 18 14:20:46 crc kubenswrapper[4739]: I0218 14:20:46.429115 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="dee39188-8dd1-45dd-afd8-ef4599d03adb" containerName="glance-log" containerID="cri-o://a44a8ff33136a79d160b7594ff4f4cc994f66dd03004902c8c1353bd8c3ef53c" gracePeriod=30 Feb 18 14:20:46 crc kubenswrapper[4739]: I0218 14:20:46.429224 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="dee39188-8dd1-45dd-afd8-ef4599d03adb" containerName="glance-httpd" containerID="cri-o://7849d496b346d76e556cffbb4d826b3d41a907f7ef452783e6466378fd4c5234" gracePeriod=30 Feb 
18 14:20:46 crc kubenswrapper[4739]: I0218 14:20:46.467510 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.467491901 podStartE2EDuration="7.467491901s" podCreationTimestamp="2026-02-18 14:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:46.453690827 +0000 UTC m=+1278.949411839" watchObservedRunningTime="2026-02-18 14:20:46.467491901 +0000 UTC m=+1278.963212823" Feb 18 14:20:47 crc kubenswrapper[4739]: I0218 14:20:47.451520 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9d24b5d-3b30-41c2-b736-7a98e88e1da4","Type":"ContainerStarted","Data":"55ab75468df7ce6273a9b4a49377e4389940f83c3a676618a01a66897198c554"} Feb 18 14:20:47 crc kubenswrapper[4739]: I0218 14:20:47.453605 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e9d24b5d-3b30-41c2-b736-7a98e88e1da4" containerName="glance-httpd" containerID="cri-o://55ab75468df7ce6273a9b4a49377e4389940f83c3a676618a01a66897198c554" gracePeriod=30 Feb 18 14:20:47 crc kubenswrapper[4739]: I0218 14:20:47.453611 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e9d24b5d-3b30-41c2-b736-7a98e88e1da4" containerName="glance-log" containerID="cri-o://c5957e0cde43838579939aa30bcc7ed4defe06badb42b7084617cf8db85e67b4" gracePeriod=30 Feb 18 14:20:47 crc kubenswrapper[4739]: I0218 14:20:47.455168 4739 generic.go:334] "Generic (PLEG): container finished" podID="dee39188-8dd1-45dd-afd8-ef4599d03adb" containerID="7849d496b346d76e556cffbb4d826b3d41a907f7ef452783e6466378fd4c5234" exitCode=143 Feb 18 14:20:47 crc kubenswrapper[4739]: I0218 14:20:47.455196 4739 generic.go:334] "Generic (PLEG): container finished" podID="dee39188-8dd1-45dd-afd8-ef4599d03adb" containerID="a44a8ff33136a79d160b7594ff4f4cc994f66dd03004902c8c1353bd8c3ef53c" exitCode=143 Feb 18 14:20:47 crc kubenswrapper[4739]: I0218 14:20:47.455212 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dee39188-8dd1-45dd-afd8-ef4599d03adb","Type":"ContainerDied","Data":"7849d496b346d76e556cffbb4d826b3d41a907f7ef452783e6466378fd4c5234"} Feb 18 14:20:47 crc kubenswrapper[4739]: I0218 14:20:47.455231 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dee39188-8dd1-45dd-afd8-ef4599d03adb","Type":"ContainerDied","Data":"a44a8ff33136a79d160b7594ff4f4cc994f66dd03004902c8c1353bd8c3ef53c"} Feb 18 14:20:47 crc kubenswrapper[4739]: I0218 14:20:47.488422 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.488403842 podStartE2EDuration="8.488403842s" podCreationTimestamp="2026-02-18 14:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:20:47.474786342 +0000 UTC m=+1279.970507264" watchObservedRunningTime="2026-02-18 14:20:47.488403842 +0000 UTC m=+1279.984124764" Feb 18 14:20:48 crc kubenswrapper[4739]: I0218 14:20:48.470817 4739 generic.go:334] "Generic (PLEG): container finished" podID="e9d24b5d-3b30-41c2-b736-7a98e88e1da4" 
containerID="55ab75468df7ce6273a9b4a49377e4389940f83c3a676618a01a66897198c554" exitCode=0 Feb 18 14:20:48 crc kubenswrapper[4739]: I0218 14:20:48.471105 4739 generic.go:334] "Generic (PLEG): container finished" podID="e9d24b5d-3b30-41c2-b736-7a98e88e1da4" containerID="c5957e0cde43838579939aa30bcc7ed4defe06badb42b7084617cf8db85e67b4" exitCode=143 Feb 18 14:20:48 crc kubenswrapper[4739]: I0218 14:20:48.470917 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9d24b5d-3b30-41c2-b736-7a98e88e1da4","Type":"ContainerDied","Data":"55ab75468df7ce6273a9b4a49377e4389940f83c3a676618a01a66897198c554"} Feb 18 14:20:48 crc kubenswrapper[4739]: I0218 14:20:48.471143 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9d24b5d-3b30-41c2-b736-7a98e88e1da4","Type":"ContainerDied","Data":"c5957e0cde43838579939aa30bcc7ed4defe06badb42b7084617cf8db85e67b4"} Feb 18 14:20:50 crc kubenswrapper[4739]: I0218 14:20:50.310627 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:20:50 crc kubenswrapper[4739]: I0218 14:20:50.371777 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-lc9pz"] Feb 18 14:20:50 crc kubenswrapper[4739]: I0218 14:20:50.372030 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" podUID="a95a3e0d-f263-464b-9406-0fc51724a068" containerName="dnsmasq-dns" containerID="cri-o://2ba789c14a907f042da88ae951cbe7458905348d9982d8330fe417e5b45cd9fc" gracePeriod=10 Feb 18 14:20:50 crc kubenswrapper[4739]: I0218 14:20:50.499478 4739 generic.go:334] "Generic (PLEG): container finished" podID="0b2ffeaa-7f58-4b22-a50e-47a96502d0c7" containerID="0a9c96ef9bc05a189057147729fcd0a7c0a62f199e816b285da0bdde192dbc40" exitCode=0 Feb 18 14:20:50 crc kubenswrapper[4739]: I0218 14:20:50.499716 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pffpk" event={"ID":"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7","Type":"ContainerDied","Data":"0a9c96ef9bc05a189057147729fcd0a7c0a62f199e816b285da0bdde192dbc40"} Feb 18 14:20:51 crc kubenswrapper[4739]: I0218 14:20:51.527705 4739 generic.go:334] "Generic (PLEG): container finished" podID="a95a3e0d-f263-464b-9406-0fc51724a068" containerID="2ba789c14a907f042da88ae951cbe7458905348d9982d8330fe417e5b45cd9fc" exitCode=0 Feb 18 14:20:51 crc kubenswrapper[4739]: I0218 14:20:51.527769 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" event={"ID":"a95a3e0d-f263-464b-9406-0fc51724a068","Type":"ContainerDied","Data":"2ba789c14a907f042da88ae951cbe7458905348d9982d8330fe417e5b45cd9fc"} Feb 18 14:20:52 crc kubenswrapper[4739]: I0218 14:20:52.317772 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" podUID="a95a3e0d-f263-464b-9406-0fc51724a068" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.179:5353: connect: connection refused" Feb 18 14:20:57 crc kubenswrapper[4739]: I0218 14:20:57.318291 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" podUID="a95a3e0d-f263-464b-9406-0fc51724a068" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.179:5353: connect: connection refused" Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.828736 4739 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pffpk" Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.947836 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-combined-ca-bundle\") pod \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.947994 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-fernet-keys\") pod \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.948037 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-config-data\") pod \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.948065 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h99sv\" (UniqueName: \"kubernetes.io/projected/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-kube-api-access-h99sv\") pod \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.948260 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-credential-keys\") pod \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.948295 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-scripts\") pod \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\" (UID: \"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7\") " Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.955243 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-kube-api-access-h99sv" (OuterVolumeSpecName: "kube-api-access-h99sv") pod "0b2ffeaa-7f58-4b22-a50e-47a96502d0c7" (UID: "0b2ffeaa-7f58-4b22-a50e-47a96502d0c7"). InnerVolumeSpecName "kube-api-access-h99sv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.963882 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0b2ffeaa-7f58-4b22-a50e-47a96502d0c7" (UID: "0b2ffeaa-7f58-4b22-a50e-47a96502d0c7"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.964041 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0b2ffeaa-7f58-4b22-a50e-47a96502d0c7" (UID: "0b2ffeaa-7f58-4b22-a50e-47a96502d0c7"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.964079 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-scripts" (OuterVolumeSpecName: "scripts") pod "0b2ffeaa-7f58-4b22-a50e-47a96502d0c7" (UID: "0b2ffeaa-7f58-4b22-a50e-47a96502d0c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.983493 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-config-data" (OuterVolumeSpecName: "config-data") pod "0b2ffeaa-7f58-4b22-a50e-47a96502d0c7" (UID: "0b2ffeaa-7f58-4b22-a50e-47a96502d0c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:20:58 crc kubenswrapper[4739]: I0218 14:20:58.987987 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b2ffeaa-7f58-4b22-a50e-47a96502d0c7" (UID: "0b2ffeaa-7f58-4b22-a50e-47a96502d0c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.051229 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h99sv\" (UniqueName: \"kubernetes.io/projected/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-kube-api-access-h99sv\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.051258 4739 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.051269 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.051279 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.051286 4739 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.051304 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.372806 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.372877 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.373044 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.374061 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d7b9d56369135778a280da4378067ee9271657484f8ba97b96f463ca53b6178a"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.374132 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://d7b9d56369135778a280da4378067ee9271657484f8ba97b96f463ca53b6178a" gracePeriod=600 Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.615278 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pffpk" event={"ID":"0b2ffeaa-7f58-4b22-a50e-47a96502d0c7","Type":"ContainerDied","Data":"70d11242c01619e7bdfd32d0a6252d06f3b61a6d441fcbc7ab28b9bd66c4286b"} Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.615316 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pffpk" Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.615323 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70d11242c01619e7bdfd32d0a6252d06f3b61a6d441fcbc7ab28b9bd66c4286b" Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.618514 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="d7b9d56369135778a280da4378067ee9271657484f8ba97b96f463ca53b6178a" exitCode=0 Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.618554 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"d7b9d56369135778a280da4378067ee9271657484f8ba97b96f463ca53b6178a"} Feb 18 14:20:59 crc kubenswrapper[4739]: I0218 14:20:59.618590 4739 scope.go:117] "RemoveContainer" containerID="a6efc2e2824f0e8bfb870590257af439370630fe923098abd18f500360b6dbf0" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.021266 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-pffpk"] Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.033047 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-pffpk"] Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.112513 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-42sfc"] Feb 18 14:21:00 crc kubenswrapper[4739]: E0218 14:21:00.113079 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a350a1-b153-4edb-b937-ff7ccec8d1de" containerName="init" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.113104 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a350a1-b153-4edb-b937-ff7ccec8d1de" containerName="init" Feb 18 14:21:00 crc 
kubenswrapper[4739]: E0218 14:21:00.113119 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b2ffeaa-7f58-4b22-a50e-47a96502d0c7" containerName="keystone-bootstrap" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.113128 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b2ffeaa-7f58-4b22-a50e-47a96502d0c7" containerName="keystone-bootstrap" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.113354 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6a350a1-b153-4edb-b937-ff7ccec8d1de" containerName="init" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.113393 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b2ffeaa-7f58-4b22-a50e-47a96502d0c7" containerName="keystone-bootstrap" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.114343 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.121262 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.121485 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.121603 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.122412 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.130618 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5fzf8" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.136070 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-42sfc"] Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.291584 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55f82\" (UniqueName: \"kubernetes.io/projected/0c42d996-bf46-4e69-892f-c720a9bce282-kube-api-access-55f82\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.291783 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-credential-keys\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.291830 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-fernet-keys\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.291851 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-config-data\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc 
kubenswrapper[4739]: I0218 14:21:00.291876 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-scripts\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.291921 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-combined-ca-bundle\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.393869 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-scripts\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.393945 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-combined-ca-bundle\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.393980 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55f82\" (UniqueName: \"kubernetes.io/projected/0c42d996-bf46-4e69-892f-c720a9bce282-kube-api-access-55f82\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.394111 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-credential-keys\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.394168 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-fernet-keys\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.394190 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-config-data\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.403747 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-combined-ca-bundle\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.403865 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-scripts\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.404005 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-fernet-keys\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.404199 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-credential-keys\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.404199 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-config-data\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.416866 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55f82\" (UniqueName: \"kubernetes.io/projected/0c42d996-bf46-4e69-892f-c720a9bce282-kube-api-access-55f82\") pod \"keystone-bootstrap-42sfc\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.424786 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b2ffeaa-7f58-4b22-a50e-47a96502d0c7" path="/var/lib/kubelet/pods/0b2ffeaa-7f58-4b22-a50e-47a96502d0c7/volumes" Feb 18 14:21:00 crc kubenswrapper[4739]: I0218 14:21:00.450428 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.318174 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" podUID="a95a3e0d-f263-464b-9406-0fc51724a068" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.179:5353: connect: connection refused" Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.320923 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:21:02 crc kubenswrapper[4739]: E0218 14:21:02.719731 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 18 14:21:02 crc kubenswrapper[4739]: E0218 14:21:02.719978 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n688h79h8ch5d6h669h577hf9h5d5h89h666h664h548h589h659h66fh555hddh668h6h6ch5c7h687h5b8h55fhdbh7dh84hdbhc6h68bh5d9h6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ngbgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e2a576aa-9125-4096-8ee5-ac83d6aaee01): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 
14:21:02.819205 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.949988 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"dee39188-8dd1-45dd-afd8-ef4599d03adb\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.950044 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dee39188-8dd1-45dd-afd8-ef4599d03adb-httpd-run\") pod \"dee39188-8dd1-45dd-afd8-ef4599d03adb\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.950161 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8swm\" (UniqueName: \"kubernetes.io/projected/dee39188-8dd1-45dd-afd8-ef4599d03adb-kube-api-access-l8swm\") pod \"dee39188-8dd1-45dd-afd8-ef4599d03adb\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.950214 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-config-data\") pod \"dee39188-8dd1-45dd-afd8-ef4599d03adb\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.950250 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-scripts\") pod \"dee39188-8dd1-45dd-afd8-ef4599d03adb\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.950301 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-combined-ca-bundle\") pod \"dee39188-8dd1-45dd-afd8-ef4599d03adb\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.950316 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dee39188-8dd1-45dd-afd8-ef4599d03adb-logs\") pod \"dee39188-8dd1-45dd-afd8-ef4599d03adb\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.950415 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-internal-tls-certs\") pod \"dee39188-8dd1-45dd-afd8-ef4599d03adb\" (UID: \"dee39188-8dd1-45dd-afd8-ef4599d03adb\") " Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.952095 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dee39188-8dd1-45dd-afd8-ef4599d03adb-logs" (OuterVolumeSpecName: "logs") pod "dee39188-8dd1-45dd-afd8-ef4599d03adb" (UID: "dee39188-8dd1-45dd-afd8-ef4599d03adb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.952909 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dee39188-8dd1-45dd-afd8-ef4599d03adb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "dee39188-8dd1-45dd-afd8-ef4599d03adb" (UID: "dee39188-8dd1-45dd-afd8-ef4599d03adb"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.957968 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-scripts" (OuterVolumeSpecName: "scripts") pod "dee39188-8dd1-45dd-afd8-ef4599d03adb" (UID: "dee39188-8dd1-45dd-afd8-ef4599d03adb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.958022 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dee39188-8dd1-45dd-afd8-ef4599d03adb-kube-api-access-l8swm" (OuterVolumeSpecName: "kube-api-access-l8swm") pod "dee39188-8dd1-45dd-afd8-ef4599d03adb" (UID: "dee39188-8dd1-45dd-afd8-ef4599d03adb"). InnerVolumeSpecName "kube-api-access-l8swm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:21:02 crc kubenswrapper[4739]: I0218 14:21:02.997984 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15" (OuterVolumeSpecName: "glance") pod "dee39188-8dd1-45dd-afd8-ef4599d03adb" (UID: "dee39188-8dd1-45dd-afd8-ef4599d03adb"). InnerVolumeSpecName "pvc-15694efd-23b4-48d1-830b-42bbc6c51b15". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.006633 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dee39188-8dd1-45dd-afd8-ef4599d03adb" (UID: "dee39188-8dd1-45dd-afd8-ef4599d03adb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.055064 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dee39188-8dd1-45dd-afd8-ef4599d03adb" (UID: "dee39188-8dd1-45dd-afd8-ef4599d03adb"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.068514 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.068552 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dee39188-8dd1-45dd-afd8-ef4599d03adb-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.068570 4739 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.068617 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") on node \"crc\" " Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.068635 4739 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dee39188-8dd1-45dd-afd8-ef4599d03adb-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.068648 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8swm\" (UniqueName: \"kubernetes.io/projected/dee39188-8dd1-45dd-afd8-ef4599d03adb-kube-api-access-l8swm\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.068667 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.076594 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-config-data" (OuterVolumeSpecName: "config-data") pod "dee39188-8dd1-45dd-afd8-ef4599d03adb" (UID: "dee39188-8dd1-45dd-afd8-ef4599d03adb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.100640 4739 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.100794 4739 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-15694efd-23b4-48d1-830b-42bbc6c51b15" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15") on node "crc" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.170221 4739 reconciler_common.go:293] "Volume detached for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.170257 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dee39188-8dd1-45dd-afd8-ef4599d03adb-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.659428 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"dee39188-8dd1-45dd-afd8-ef4599d03adb","Type":"ContainerDied","Data":"3de40032c9cfb4df3fb82bbfc31efd6983d0c4857cda5c9f3d8ac5118ab12bd7"} Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.659535 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.718997 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.741302 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.772552 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:21:03 crc kubenswrapper[4739]: E0218 14:21:03.773116 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dee39188-8dd1-45dd-afd8-ef4599d03adb" containerName="glance-httpd" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.773139 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="dee39188-8dd1-45dd-afd8-ef4599d03adb" containerName="glance-httpd" Feb 18 14:21:03 crc kubenswrapper[4739]: E0218 14:21:03.773177 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dee39188-8dd1-45dd-afd8-ef4599d03adb" containerName="glance-log" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.773187 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="dee39188-8dd1-45dd-afd8-ef4599d03adb" containerName="glance-log" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.773691 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="dee39188-8dd1-45dd-afd8-ef4599d03adb" containerName="glance-log" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.773718 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="dee39188-8dd1-45dd-afd8-ef4599d03adb" containerName="glance-httpd" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.775573 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.777609 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.780127 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.784681 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.896278 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.896581 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.896663 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3677acc3-fd05-4d33-ac6c-aa420ecce125-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.896738 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.896761 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.896916 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3677acc3-fd05-4d33-ac6c-aa420ecce125-logs\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.897073 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92f7z\" (UniqueName: \"kubernetes.io/projected/3677acc3-fd05-4d33-ac6c-aa420ecce125-kube-api-access-92f7z\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.897415 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.999464 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.999536 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.999599 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.999625 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3677acc3-fd05-4d33-ac6c-aa420ecce125-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.999654 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.999672 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.999706 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3677acc3-fd05-4d33-ac6c-aa420ecce125-logs\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:03 crc kubenswrapper[4739]: I0218 14:21:03.999755 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92f7z\" (UniqueName: \"kubernetes.io/projected/3677acc3-fd05-4d33-ac6c-aa420ecce125-kube-api-access-92f7z\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:04 crc kubenswrapper[4739]: I0218 14:21:04.001723 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3677acc3-fd05-4d33-ac6c-aa420ecce125-logs\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:04 crc kubenswrapper[4739]: I0218 14:21:04.001911 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3677acc3-fd05-4d33-ac6c-aa420ecce125-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:04 crc kubenswrapper[4739]: I0218 14:21:04.006678 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:04 crc kubenswrapper[4739]: I0218 14:21:04.006685 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:04 crc kubenswrapper[4739]: I0218 14:21:04.008345 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:21:04 crc kubenswrapper[4739]: I0218 14:21:04.008381 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0bd6abac90ebac69ac03837941e4aa1820f14a49ea1b1fe31e1dd216b0487447/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 18 14:21:04 crc kubenswrapper[4739]: I0218 14:21:04.012591 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:04 crc kubenswrapper[4739]: I0218 14:21:04.018263 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92f7z\" (UniqueName: \"kubernetes.io/projected/3677acc3-fd05-4d33-ac6c-aa420ecce125-kube-api-access-92f7z\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:04 crc kubenswrapper[4739]: I0218 14:21:04.025198 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:04 crc kubenswrapper[4739]: I0218 14:21:04.058045 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"glance-default-internal-api-0\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:21:04 crc kubenswrapper[4739]: I0218 14:21:04.108802 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:21:04 crc kubenswrapper[4739]: I0218 14:21:04.424638 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dee39188-8dd1-45dd-afd8-ef4599d03adb" path="/var/lib/kubelet/pods/dee39188-8dd1-45dd-afd8-ef4599d03adb/volumes" Feb 18 14:21:10 crc kubenswrapper[4739]: I0218 14:21:10.985844 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 14:21:10 crc kubenswrapper[4739]: I0218 14:21:10.986346 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 14:21:11 crc kubenswrapper[4739]: E0218 14:21:11.050250 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Feb 18 14:21:11 crc kubenswrapper[4739]: E0218 14:21:11.050752 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6wgcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
heat-db-sync-2dhxm_openstack(3edd4390-e376-469a-b7c5-9bd7bf9dd100): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:21:11 crc kubenswrapper[4739]: E0218 14:21:11.052041 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-2dhxm" podUID="3edd4390-e376-469a-b7c5-9bd7bf9dd100" Feb 18 14:21:11 crc kubenswrapper[4739]: E0218 14:21:11.564882 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 18 14:21:11 crc kubenswrapper[4739]: E0218 14:21:11.565238 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7wlp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-h5s86_openstack(a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:21:11 crc kubenswrapper[4739]: E0218 14:21:11.567366 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-h5s86" podUID="a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.675564 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.686399 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.743039 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" event={"ID":"a95a3e0d-f263-464b-9406-0fc51724a068","Type":"ContainerDied","Data":"e8e67403108bde3a436c81c4b7ef9a41f1b4af29116b93e8959bf7b75aa603d8"} Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.743125 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.746061 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.746496 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e9d24b5d-3b30-41c2-b736-7a98e88e1da4","Type":"ContainerDied","Data":"d7195d297c9d5141a71387652075a97edc794fb733f7afeadd4dd323957a1f63"} Feb 18 14:21:11 crc kubenswrapper[4739]: E0218 14:21:11.748433 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-2dhxm" podUID="3edd4390-e376-469a-b7c5-9bd7bf9dd100" Feb 18 14:21:11 crc kubenswrapper[4739]: E0218 14:21:11.748474 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-h5s86" podUID="a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790093 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-logs\") pod \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790191 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-scripts\") pod \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790230 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-ovsdbserver-sb\") pod \"a95a3e0d-f263-464b-9406-0fc51724a068\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790275 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-public-tls-certs\") pod \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790425 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\" 
(UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790525 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-ovsdbserver-nb\") pod \"a95a3e0d-f263-464b-9406-0fc51724a068\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790672 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-dns-swift-storage-0\") pod \"a95a3e0d-f263-464b-9406-0fc51724a068\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790736 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-httpd-run\") pod \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790783 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-dns-svc\") pod \"a95a3e0d-f263-464b-9406-0fc51724a068\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790809 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zgw6\" (UniqueName: \"kubernetes.io/projected/a95a3e0d-f263-464b-9406-0fc51724a068-kube-api-access-9zgw6\") pod \"a95a3e0d-f263-464b-9406-0fc51724a068\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790844 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkcvc\" (UniqueName: \"kubernetes.io/projected/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-kube-api-access-bkcvc\") pod \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790885 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-config-data\") pod \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790916 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-config\") pod \"a95a3e0d-f263-464b-9406-0fc51724a068\" (UID: \"a95a3e0d-f263-464b-9406-0fc51724a068\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.790938 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-combined-ca-bundle\") pod \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\" (UID: \"e9d24b5d-3b30-41c2-b736-7a98e88e1da4\") " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.791022 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e9d24b5d-3b30-41c2-b736-7a98e88e1da4" (UID: 
"e9d24b5d-3b30-41c2-b736-7a98e88e1da4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.791059 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-logs" (OuterVolumeSpecName: "logs") pod "e9d24b5d-3b30-41c2-b736-7a98e88e1da4" (UID: "e9d24b5d-3b30-41c2-b736-7a98e88e1da4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.791719 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.791749 4739 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.798826 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-scripts" (OuterVolumeSpecName: "scripts") pod "e9d24b5d-3b30-41c2-b736-7a98e88e1da4" (UID: "e9d24b5d-3b30-41c2-b736-7a98e88e1da4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.810871 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-kube-api-access-bkcvc" (OuterVolumeSpecName: "kube-api-access-bkcvc") pod "e9d24b5d-3b30-41c2-b736-7a98e88e1da4" (UID: "e9d24b5d-3b30-41c2-b736-7a98e88e1da4"). InnerVolumeSpecName "kube-api-access-bkcvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.814381 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a95a3e0d-f263-464b-9406-0fc51724a068-kube-api-access-9zgw6" (OuterVolumeSpecName: "kube-api-access-9zgw6") pod "a95a3e0d-f263-464b-9406-0fc51724a068" (UID: "a95a3e0d-f263-464b-9406-0fc51724a068"). InnerVolumeSpecName "kube-api-access-9zgw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.815845 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b" (OuterVolumeSpecName: "glance") pod "e9d24b5d-3b30-41c2-b736-7a98e88e1da4" (UID: "e9d24b5d-3b30-41c2-b736-7a98e88e1da4"). InnerVolumeSpecName "pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.851132 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a95a3e0d-f263-464b-9406-0fc51724a068" (UID: "a95a3e0d-f263-464b-9406-0fc51724a068"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.857915 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9d24b5d-3b30-41c2-b736-7a98e88e1da4" (UID: "e9d24b5d-3b30-41c2-b736-7a98e88e1da4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.861159 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a95a3e0d-f263-464b-9406-0fc51724a068" (UID: "a95a3e0d-f263-464b-9406-0fc51724a068"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.867193 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e9d24b5d-3b30-41c2-b736-7a98e88e1da4" (UID: "e9d24b5d-3b30-41c2-b736-7a98e88e1da4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.885208 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-config-data" (OuterVolumeSpecName: "config-data") pod "e9d24b5d-3b30-41c2-b736-7a98e88e1da4" (UID: "e9d24b5d-3b30-41c2-b736-7a98e88e1da4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.886672 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a95a3e0d-f263-464b-9406-0fc51724a068" (UID: "a95a3e0d-f263-464b-9406-0fc51724a068"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.894010 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.894045 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.894060 4739 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.894088 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") on node \"crc\" " Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.894101 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.894113 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.894125 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zgw6\" (UniqueName: \"kubernetes.io/projected/a95a3e0d-f263-464b-9406-0fc51724a068-kube-api-access-9zgw6\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.894138 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkcvc\" (UniqueName: \"kubernetes.io/projected/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-kube-api-access-bkcvc\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.894150 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.894161 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d24b5d-3b30-41c2-b736-7a98e88e1da4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.903114 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a95a3e0d-f263-464b-9406-0fc51724a068" (UID: "a95a3e0d-f263-464b-9406-0fc51724a068"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.909634 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-config" (OuterVolumeSpecName: "config") pod "a95a3e0d-f263-464b-9406-0fc51724a068" (UID: "a95a3e0d-f263-464b-9406-0fc51724a068"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.944095 4739 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.944259 4739 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b") on node "crc" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.995957 4739 reconciler_common.go:293] "Volume detached for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.995993 4739 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:11 crc kubenswrapper[4739]: I0218 14:21:11.996008 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a95a3e0d-f263-464b-9406-0fc51724a068-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.112554 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-lc9pz"] Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.121935 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-lc9pz"] Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.142631 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.174043 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.185757 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:21:12 crc kubenswrapper[4739]: E0218 14:21:12.186261 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d24b5d-3b30-41c2-b736-7a98e88e1da4" containerName="glance-httpd" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.186281 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d24b5d-3b30-41c2-b736-7a98e88e1da4" containerName="glance-httpd" Feb 18 14:21:12 crc kubenswrapper[4739]: E0218 14:21:12.186315 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a95a3e0d-f263-464b-9406-0fc51724a068" containerName="init" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.186322 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a95a3e0d-f263-464b-9406-0fc51724a068" containerName="init" Feb 18 14:21:12 crc kubenswrapper[4739]: E0218 14:21:12.186330 4739 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a95a3e0d-f263-464b-9406-0fc51724a068" containerName="dnsmasq-dns" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.186336 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a95a3e0d-f263-464b-9406-0fc51724a068" containerName="dnsmasq-dns" Feb 18 14:21:12 crc kubenswrapper[4739]: E0218 14:21:12.186354 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d24b5d-3b30-41c2-b736-7a98e88e1da4" containerName="glance-log" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.186360 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d24b5d-3b30-41c2-b736-7a98e88e1da4" containerName="glance-log" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.186558 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a95a3e0d-f263-464b-9406-0fc51724a068" containerName="dnsmasq-dns" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.186581 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9d24b5d-3b30-41c2-b736-7a98e88e1da4" containerName="glance-log" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.186598 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9d24b5d-3b30-41c2-b736-7a98e88e1da4" containerName="glance-httpd" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.191371 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.193768 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.193869 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.197873 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.304344 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fd27\" (UniqueName: \"kubernetes.io/projected/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-kube-api-access-6fd27\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.304529 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-logs\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.304580 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.304676 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " 
pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.304730 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-config-data\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.304752 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-scripts\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.304822 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.304882 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.318474 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-lc9pz" podUID="a95a3e0d-f263-464b-9406-0fc51724a068" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.179:5353: i/o timeout" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.406804 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-logs\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.406859 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.406953 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.407004 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-config-data\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 
14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.407025 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-scripts\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.407075 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.407299 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.407397 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fd27\" (UniqueName: \"kubernetes.io/projected/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-kube-api-access-6fd27\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.407586 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.408133 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-logs\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.409745 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
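
The external Glance API pod's volumes go through the same SetUp sequence as the internal one's above, with the CSI attacher again skipping MountDevice because the hostpath provisioner does not set the STAGE_UNSTAGE_VOLUME capability. A capture this dense is easier to triage once condensed; the following is a minimal sketch, not part of the original capture or of kubelet, written in plain Python using only the standard library. It assumes the journal text arrives on stdin (for instance `journalctl -u kubelet | python3 kubelet_summary.py`, an illustrative invocation and file name) and matches only the three message families that dominate this section: MountVolume.SetUp successes, ErrImagePull/ImagePullBackOff failures, and probe transitions.

#!/usr/bin/env python3
# Hypothetical helper, not part of the captured log or of kubelet: read a kubelet
# journal dump on stdin and summarize mount results, image pull failures, and
# probe transitions per pod.
import re
import sys
from collections import defaultdict

POD_RE = re.compile(r'pod="([^"]+)"')                    # pod="namespace/name"
MOUNT_OK_RE = re.compile(r'MountVolume\.SetUp succeeded for volume \\?"([^"\\]+)')
PULL_ERR_RE = re.compile(r'ErrImagePull|ImagePullBackOff')
PROBE_RE = re.compile(r'probe="(\w+)" status="([^"]*)"')

mounts = defaultdict(set)         # pod -> volumes whose SetUp succeeded
pull_failures = defaultdict(int)  # pod -> number of image-pull failure lines
probes = defaultdict(list)        # pod -> [(probe type, reported status), ...]

for line in sys.stdin:
    pod_match = POD_RE.search(line)
    if not pod_match:
        continue                  # skip lines without a pod="..." attribute
    pod = pod_match.group(1)
    mount = MOUNT_OK_RE.search(line)
    if mount:
        mounts[pod].add(mount.group(1))
    if PULL_ERR_RE.search(line):
        pull_failures[pod] += 1
    probe = PROBE_RE.search(line)
    if probe:
        probes[pod].append((probe.group(1), probe.group(2) or '""'))

for pod in sorted(set(mounts) | set(pull_failures) | set(probes)):
    print(pod)
    if mounts[pod]:
        print("  volumes mounted:", ", ".join(sorted(mounts[pod])))
    if pull_failures[pod]:
        print("  image pull failures:", pull_failures[pod])
    for probe_type, status in probes[pod]:
        print(f"  {probe_type} probe -> {status}")

Fed the surrounding lines, a filter like this would list the named volumes that finished mounting for the two Glance pods, the pull failures recorded against the heat, barbican and cinder db-sync pods, and the startup and readiness probe transitions logged for the Glance pods.
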
Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.409791 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f742b1b3d6273dd3375e0e5a76a4c01f047ef0c4f7f8765a09ef674c2c3b6349/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.411006 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.412815 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-scripts\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.413154 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.424556 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-config-data\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.430345 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fd27\" (UniqueName: \"kubernetes.io/projected/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-kube-api-access-6fd27\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.451954 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a95a3e0d-f263-464b-9406-0fc51724a068" path="/var/lib/kubelet/pods/a95a3e0d-f263-464b-9406-0fc51724a068/volumes" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.452953 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9d24b5d-3b30-41c2-b736-7a98e88e1da4" path="/var/lib/kubelet/pods/e9d24b5d-3b30-41c2-b736-7a98e88e1da4/volumes" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.462060 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"glance-default-external-api-0\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.516583 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 14:21:12 crc kubenswrapper[4739]: I0218 14:21:12.958589 4739 scope.go:117] "RemoveContainer" containerID="7849d496b346d76e556cffbb4d826b3d41a907f7ef452783e6466378fd4c5234" Feb 18 14:21:12 crc kubenswrapper[4739]: E0218 14:21:12.961653 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 18 14:21:12 crc kubenswrapper[4739]: E0218 14:21:12.961800 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vh97j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-hm27f_openstack(51d77527-a940-4423-ac63-4a7cdf366510): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:21:12 crc kubenswrapper[4739]: E0218 14:21:12.963119 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-hm27f" 
podUID="51d77527-a940-4423-ac63-4a7cdf366510" Feb 18 14:21:13 crc kubenswrapper[4739]: I0218 14:21:13.406485 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-42sfc"] Feb 18 14:21:13 crc kubenswrapper[4739]: I0218 14:21:13.588597 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:21:13 crc kubenswrapper[4739]: W0218 14:21:13.617795 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c42d996_bf46_4e69_892f_c720a9bce282.slice/crio-13331583df51846abfa6c91893bf2ea8b25631899b499511348e016cb712ca0f WatchSource:0}: Error finding container 13331583df51846abfa6c91893bf2ea8b25631899b499511348e016cb712ca0f: Status 404 returned error can't find the container with id 13331583df51846abfa6c91893bf2ea8b25631899b499511348e016cb712ca0f Feb 18 14:21:13 crc kubenswrapper[4739]: W0218 14:21:13.624920 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3677acc3_fd05_4d33_ac6c_aa420ecce125.slice/crio-3a259073ef5437a741c7e7a8473f57ccd05a34b5954be95c2003c50962d48fb6 WatchSource:0}: Error finding container 3a259073ef5437a741c7e7a8473f57ccd05a34b5954be95c2003c50962d48fb6: Status 404 returned error can't find the container with id 3a259073ef5437a741c7e7a8473f57ccd05a34b5954be95c2003c50962d48fb6 Feb 18 14:21:13 crc kubenswrapper[4739]: I0218 14:21:13.641235 4739 scope.go:117] "RemoveContainer" containerID="a44a8ff33136a79d160b7594ff4f4cc994f66dd03004902c8c1353bd8c3ef53c" Feb 18 14:21:13 crc kubenswrapper[4739]: I0218 14:21:13.783147 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-42sfc" event={"ID":"0c42d996-bf46-4e69-892f-c720a9bce282","Type":"ContainerStarted","Data":"13331583df51846abfa6c91893bf2ea8b25631899b499511348e016cb712ca0f"} Feb 18 14:21:13 crc kubenswrapper[4739]: I0218 14:21:13.796784 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3677acc3-fd05-4d33-ac6c-aa420ecce125","Type":"ContainerStarted","Data":"3a259073ef5437a741c7e7a8473f57ccd05a34b5954be95c2003c50962d48fb6"} Feb 18 14:21:13 crc kubenswrapper[4739]: E0218 14:21:13.803044 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-hm27f" podUID="51d77527-a940-4423-ac63-4a7cdf366510" Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.073357 4739 scope.go:117] "RemoveContainer" containerID="2ba789c14a907f042da88ae951cbe7458905348d9982d8330fe417e5b45cd9fc" Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.109147 4739 scope.go:117] "RemoveContainer" containerID="521ee440b42cc6ac855fe6f696353905b77bad514b6fa532070f2cedd7a11e27" Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.174761 4739 scope.go:117] "RemoveContainer" containerID="55ab75468df7ce6273a9b4a49377e4389940f83c3a676618a01a66897198c554" Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.223723 4739 scope.go:117] "RemoveContainer" containerID="c5957e0cde43838579939aa30bcc7ed4defe06badb42b7084617cf8db85e67b4" Feb 18 14:21:14 crc kubenswrapper[4739]: W0218 14:21:14.230094 4739 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5b6ca41_d34e_4ef9_b04c_4de7a50b71ad.slice/crio-a83503aad1227f8256e1acb3ea10be6b3f0c314a395eb1f234c642acb0b7ab14 WatchSource:0}: Error finding container a83503aad1227f8256e1acb3ea10be6b3f0c314a395eb1f234c642acb0b7ab14: Status 404 returned error can't find the container with id a83503aad1227f8256e1acb3ea10be6b3f0c314a395eb1f234c642acb0b7ab14 Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.232946 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.814061 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-42sfc" event={"ID":"0c42d996-bf46-4e69-892f-c720a9bce282","Type":"ContainerStarted","Data":"331132c24f3ac7a502d7f3f575324d2550d00d5e32f94df80daa161182a3e385"} Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.819928 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124"} Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.825873 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad","Type":"ContainerStarted","Data":"a83503aad1227f8256e1acb3ea10be6b3f0c314a395eb1f234c642acb0b7ab14"} Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.837217 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2a576aa-9125-4096-8ee5-ac83d6aaee01","Type":"ContainerStarted","Data":"c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff"} Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.843899 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q58nf" event={"ID":"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc","Type":"ContainerStarted","Data":"d755d74166c084972a673dd411c3ae3925155e88943bb67d4481d42cff283489"} Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.845241 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-42sfc" podStartSLOduration=14.845219109 podStartE2EDuration="14.845219109s" podCreationTimestamp="2026-02-18 14:21:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:14.837581835 +0000 UTC m=+1307.333302757" watchObservedRunningTime="2026-02-18 14:21:14.845219109 +0000 UTC m=+1307.340940041" Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.847765 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3677acc3-fd05-4d33-ac6c-aa420ecce125","Type":"ContainerStarted","Data":"7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768"} Feb 18 14:21:14 crc kubenswrapper[4739]: I0218 14:21:14.880395 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-q58nf" podStartSLOduration=5.930040861 podStartE2EDuration="35.880376155s" podCreationTimestamp="2026-02-18 14:20:39 +0000 UTC" firstStartedPulling="2026-02-18 14:20:41.600630699 +0000 UTC m=+1274.096351631" lastFinishedPulling="2026-02-18 14:21:11.550966003 +0000 UTC m=+1304.046686925" observedRunningTime="2026-02-18 14:21:14.87585355 +0000 UTC 
m=+1307.371574472" watchObservedRunningTime="2026-02-18 14:21:14.880376155 +0000 UTC m=+1307.376097077" Feb 18 14:21:15 crc kubenswrapper[4739]: I0218 14:21:15.889903 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad","Type":"ContainerStarted","Data":"c780b2636e91712d69d355da22c8be023ac8a48eb8e209ca36fa75cd60964d96"} Feb 18 14:21:15 crc kubenswrapper[4739]: I0218 14:21:15.890539 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad","Type":"ContainerStarted","Data":"2ac1313ffdbad15c09d0bb7f2a4d1b596f72ac62a6780cb62e70fa5559b8c999"} Feb 18 14:21:15 crc kubenswrapper[4739]: I0218 14:21:15.894871 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3677acc3-fd05-4d33-ac6c-aa420ecce125","Type":"ContainerStarted","Data":"55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59"} Feb 18 14:21:15 crc kubenswrapper[4739]: I0218 14:21:15.940780 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.940750163 podStartE2EDuration="3.940750163s" podCreationTimestamp="2026-02-18 14:21:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:15.917914652 +0000 UTC m=+1308.413635614" watchObservedRunningTime="2026-02-18 14:21:15.940750163 +0000 UTC m=+1308.436471095" Feb 18 14:21:15 crc kubenswrapper[4739]: I0218 14:21:15.982934 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=12.982864296 podStartE2EDuration="12.982864296s" podCreationTimestamp="2026-02-18 14:21:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:15.957714906 +0000 UTC m=+1308.453435848" watchObservedRunningTime="2026-02-18 14:21:15.982864296 +0000 UTC m=+1308.478585228" Feb 18 14:21:22 crc kubenswrapper[4739]: I0218 14:21:22.517162 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 14:21:22 crc kubenswrapper[4739]: I0218 14:21:22.517644 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 14:21:22 crc kubenswrapper[4739]: I0218 14:21:22.602679 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 14:21:22 crc kubenswrapper[4739]: I0218 14:21:22.603344 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 14:21:22 crc kubenswrapper[4739]: I0218 14:21:22.981400 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 14:21:22 crc kubenswrapper[4739]: I0218 14:21:22.982229 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 14:21:24 crc kubenswrapper[4739]: I0218 14:21:24.109312 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 14:21:24 crc kubenswrapper[4739]: I0218 14:21:24.109411 4739 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 14:21:24 crc kubenswrapper[4739]: I0218 14:21:24.150323 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 14:21:24 crc kubenswrapper[4739]: I0218 14:21:24.162910 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 14:21:25 crc kubenswrapper[4739]: I0218 14:21:25.003067 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 14:21:25 crc kubenswrapper[4739]: I0218 14:21:25.003420 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 14:21:25 crc kubenswrapper[4739]: I0218 14:21:25.003467 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 14:21:25 crc kubenswrapper[4739]: I0218 14:21:25.003430 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 14:21:28 crc kubenswrapper[4739]: I0218 14:21:28.033095 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2a576aa-9125-4096-8ee5-ac83d6aaee01","Type":"ContainerStarted","Data":"709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3"} Feb 18 14:21:28 crc kubenswrapper[4739]: I0218 14:21:28.035031 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h5s86" event={"ID":"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8","Type":"ContainerStarted","Data":"d0d344e509459df1445da7eae6edf0b5c1a43772e911ac197e49dc6ffc6fe7a4"} Feb 18 14:21:28 crc kubenswrapper[4739]: I0218 14:21:28.037856 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-2dhxm" event={"ID":"3edd4390-e376-469a-b7c5-9bd7bf9dd100","Type":"ContainerStarted","Data":"cb1eddfed9e44b497a97463dd1b3569fad968271c4c4d74bfb3de94948277b04"} Feb 18 14:21:28 crc kubenswrapper[4739]: I0218 14:21:28.046599 4739 generic.go:334] "Generic (PLEG): container finished" podID="0c42d996-bf46-4e69-892f-c720a9bce282" containerID="331132c24f3ac7a502d7f3f575324d2550d00d5e32f94df80daa161182a3e385" exitCode=0 Feb 18 14:21:28 crc kubenswrapper[4739]: I0218 14:21:28.046712 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-42sfc" event={"ID":"0c42d996-bf46-4e69-892f-c720a9bce282","Type":"ContainerDied","Data":"331132c24f3ac7a502d7f3f575324d2550d00d5e32f94df80daa161182a3e385"} Feb 18 14:21:28 crc kubenswrapper[4739]: I0218 14:21:28.061316 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-h5s86" podStartSLOduration=3.983007367 podStartE2EDuration="49.061294947s" podCreationTimestamp="2026-02-18 14:20:39 +0000 UTC" firstStartedPulling="2026-02-18 14:20:42.095283178 +0000 UTC m=+1274.591004100" lastFinishedPulling="2026-02-18 14:21:27.173570748 +0000 UTC m=+1319.669291680" observedRunningTime="2026-02-18 14:21:28.048348007 +0000 UTC m=+1320.544068939" watchObservedRunningTime="2026-02-18 14:21:28.061294947 +0000 UTC m=+1320.557015869" Feb 18 14:21:28 crc kubenswrapper[4739]: I0218 14:21:28.097001 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-2dhxm" podStartSLOduration=3.671636629 podStartE2EDuration="49.096979246s" podCreationTimestamp="2026-02-18 14:20:39 +0000 UTC" firstStartedPulling="2026-02-18 
14:20:41.43577806 +0000 UTC m=+1273.931498992" lastFinishedPulling="2026-02-18 14:21:26.861120687 +0000 UTC m=+1319.356841609" observedRunningTime="2026-02-18 14:21:28.082923208 +0000 UTC m=+1320.578644130" watchObservedRunningTime="2026-02-18 14:21:28.096979246 +0000 UTC m=+1320.592700188" Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.069140 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hm27f" event={"ID":"51d77527-a940-4423-ac63-4a7cdf366510","Type":"ContainerStarted","Data":"13f81a775889f6ea108dde89cc1b11f4232f55a79b2165f0775cd5d113f547b2"} Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.108607 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-hm27f" podStartSLOduration=4.286423138 podStartE2EDuration="50.1085633s" podCreationTimestamp="2026-02-18 14:20:39 +0000 UTC" firstStartedPulling="2026-02-18 14:20:41.349048685 +0000 UTC m=+1273.844769607" lastFinishedPulling="2026-02-18 14:21:27.171188847 +0000 UTC m=+1319.666909769" observedRunningTime="2026-02-18 14:21:29.093103436 +0000 UTC m=+1321.588824368" watchObservedRunningTime="2026-02-18 14:21:29.1085633 +0000 UTC m=+1321.604284222" Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.617378 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.779957 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-credential-keys\") pod \"0c42d996-bf46-4e69-892f-c720a9bce282\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.780008 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-config-data\") pod \"0c42d996-bf46-4e69-892f-c720a9bce282\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.780169 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-scripts\") pod \"0c42d996-bf46-4e69-892f-c720a9bce282\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.780301 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55f82\" (UniqueName: \"kubernetes.io/projected/0c42d996-bf46-4e69-892f-c720a9bce282-kube-api-access-55f82\") pod \"0c42d996-bf46-4e69-892f-c720a9bce282\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.780338 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-fernet-keys\") pod \"0c42d996-bf46-4e69-892f-c720a9bce282\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.780476 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-combined-ca-bundle\") pod \"0c42d996-bf46-4e69-892f-c720a9bce282\" (UID: \"0c42d996-bf46-4e69-892f-c720a9bce282\") " Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 
14:21:29.786786 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-scripts" (OuterVolumeSpecName: "scripts") pod "0c42d996-bf46-4e69-892f-c720a9bce282" (UID: "0c42d996-bf46-4e69-892f-c720a9bce282"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.788608 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0c42d996-bf46-4e69-892f-c720a9bce282" (UID: "0c42d996-bf46-4e69-892f-c720a9bce282"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.788748 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c42d996-bf46-4e69-892f-c720a9bce282-kube-api-access-55f82" (OuterVolumeSpecName: "kube-api-access-55f82") pod "0c42d996-bf46-4e69-892f-c720a9bce282" (UID: "0c42d996-bf46-4e69-892f-c720a9bce282"). InnerVolumeSpecName "kube-api-access-55f82". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.790647 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0c42d996-bf46-4e69-892f-c720a9bce282" (UID: "0c42d996-bf46-4e69-892f-c720a9bce282"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.830144 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-config-data" (OuterVolumeSpecName: "config-data") pod "0c42d996-bf46-4e69-892f-c720a9bce282" (UID: "0c42d996-bf46-4e69-892f-c720a9bce282"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.884293 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.884330 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55f82\" (UniqueName: \"kubernetes.io/projected/0c42d996-bf46-4e69-892f-c720a9bce282-kube-api-access-55f82\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.884343 4739 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.884356 4739 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.884367 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.890133 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c42d996-bf46-4e69-892f-c720a9bce282" (UID: "0c42d996-bf46-4e69-892f-c720a9bce282"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:29 crc kubenswrapper[4739]: I0218 14:21:29.986524 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c42d996-bf46-4e69-892f-c720a9bce282-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.085236 4739 generic.go:334] "Generic (PLEG): container finished" podID="f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc" containerID="d755d74166c084972a673dd411c3ae3925155e88943bb67d4481d42cff283489" exitCode=0 Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.085316 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q58nf" event={"ID":"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc","Type":"ContainerDied","Data":"d755d74166c084972a673dd411c3ae3925155e88943bb67d4481d42cff283489"} Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.089900 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-42sfc" event={"ID":"0c42d996-bf46-4e69-892f-c720a9bce282","Type":"ContainerDied","Data":"13331583df51846abfa6c91893bf2ea8b25631899b499511348e016cb712ca0f"} Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.089940 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13331583df51846abfa6c91893bf2ea8b25631899b499511348e016cb712ca0f" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.090000 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-42sfc" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.213344 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7dff988c46-72t9g"] Feb 18 14:21:30 crc kubenswrapper[4739]: E0218 14:21:30.213877 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c42d996-bf46-4e69-892f-c720a9bce282" containerName="keystone-bootstrap" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.213902 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c42d996-bf46-4e69-892f-c720a9bce282" containerName="keystone-bootstrap" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.214176 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c42d996-bf46-4e69-892f-c720a9bce282" containerName="keystone-bootstrap" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.215260 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.217950 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.218369 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5fzf8" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.218683 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.218812 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.218983 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.228695 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.241135 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7dff988c46-72t9g"] Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.393953 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz6x9\" (UniqueName: \"kubernetes.io/projected/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-kube-api-access-sz6x9\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.394019 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-fernet-keys\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.394053 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-internal-tls-certs\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.394356 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-config-data\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.394517 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-scripts\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.394718 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-credential-keys\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.394750 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-combined-ca-bundle\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.394798 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-public-tls-certs\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.473141 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.473644 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.478368 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.478522 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.480652 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.498147 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-combined-ca-bundle\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.498233 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-public-tls-certs\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.498266 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-sz6x9\" (UniqueName: \"kubernetes.io/projected/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-kube-api-access-sz6x9\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.498290 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-fernet-keys\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.498318 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-internal-tls-certs\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.498478 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-config-data\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.498530 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-scripts\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.498599 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-credential-keys\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.502547 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.508321 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-scripts\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.508461 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-public-tls-certs\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.508520 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-fernet-keys\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.521590 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-credential-keys\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.522667 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-combined-ca-bundle\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.523432 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-internal-tls-certs\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.525620 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-config-data\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.538301 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz6x9\" (UniqueName: \"kubernetes.io/projected/74cf9632-a7c0-4b6e-98ce-ebd6411a6594-kube-api-access-sz6x9\") pod \"keystone-7dff988c46-72t9g\" (UID: \"74cf9632-a7c0-4b6e-98ce-ebd6411a6594\") " pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:30 crc kubenswrapper[4739]: I0218 14:21:30.542864 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.072862 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7dff988c46-72t9g"] Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.108387 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7dff988c46-72t9g" event={"ID":"74cf9632-a7c0-4b6e-98ce-ebd6411a6594","Type":"ContainerStarted","Data":"b98cb9aafff0356094b6f04f8e15d578115ab86d26d1c69d5d1753220bf423e1"} Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.450091 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-q58nf" Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.533235 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-logs\") pod \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.533301 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-config-data\") pod \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.533374 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-scripts\") pod \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.533637 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d67kg\" (UniqueName: \"kubernetes.io/projected/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-kube-api-access-d67kg\") pod \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.533696 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-combined-ca-bundle\") pod \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\" (UID: \"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc\") " Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.536523 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-logs" (OuterVolumeSpecName: "logs") pod "f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc" (UID: "f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.542006 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-scripts" (OuterVolumeSpecName: "scripts") pod "f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc" (UID: "f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.547430 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-kube-api-access-d67kg" (OuterVolumeSpecName: "kube-api-access-d67kg") pod "f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc" (UID: "f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc"). InnerVolumeSpecName "kube-api-access-d67kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.584130 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-config-data" (OuterVolumeSpecName: "config-data") pod "f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc" (UID: "f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.598841 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc" (UID: "f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.636364 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.636411 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.636423 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.636434 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d67kg\" (UniqueName: \"kubernetes.io/projected/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-kube-api-access-d67kg\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:31 crc kubenswrapper[4739]: I0218 14:21:31.636466 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.126847 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7dff988c46-72t9g" event={"ID":"74cf9632-a7c0-4b6e-98ce-ebd6411a6594","Type":"ContainerStarted","Data":"e5c1f3bf17d3400a13171c975f2d5f673fb911dbbe512cb159d24351431b4c93"} Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.126920 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.134665 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-q58nf" event={"ID":"f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc","Type":"ContainerDied","Data":"1870c4359d29029459a4d3730dceade0333f6df6959a787f14729f3d6e56a8fd"} Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.134719 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1870c4359d29029459a4d3730dceade0333f6df6959a787f14729f3d6e56a8fd" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.134724 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-q58nf" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.174155 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7dff988c46-72t9g" podStartSLOduration=2.174137129 podStartE2EDuration="2.174137129s" podCreationTimestamp="2026-02-18 14:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:32.148066275 +0000 UTC m=+1324.643787197" watchObservedRunningTime="2026-02-18 14:21:32.174137129 +0000 UTC m=+1324.669858051" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.370400 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-65fbfb5b48-rchlc"] Feb 18 14:21:32 crc kubenswrapper[4739]: E0218 14:21:32.371123 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc" containerName="placement-db-sync" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.371150 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc" containerName="placement-db-sync" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.371480 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc" containerName="placement-db-sync" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.372944 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.375302 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.375542 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-f4jrj" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.375546 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.375688 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.375741 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.403467 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-65fbfb5b48-rchlc"] Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.459571 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38710bdf-e679-45f4-b3a6-597a3b1cb186-logs\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.460028 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-config-data\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.460245 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-internal-tls-certs\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.460281 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn54q\" (UniqueName: \"kubernetes.io/projected/38710bdf-e679-45f4-b3a6-597a3b1cb186-kube-api-access-nn54q\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.460339 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-combined-ca-bundle\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.460398 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-scripts\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.460463 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-public-tls-certs\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.562606 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-internal-tls-certs\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.562697 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn54q\" (UniqueName: \"kubernetes.io/projected/38710bdf-e679-45f4-b3a6-597a3b1cb186-kube-api-access-nn54q\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.562735 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-combined-ca-bundle\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.562837 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-scripts\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.562904 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-public-tls-certs\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.562983 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38710bdf-e679-45f4-b3a6-597a3b1cb186-logs\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.563057 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-config-data\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.563723 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38710bdf-e679-45f4-b3a6-597a3b1cb186-logs\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.576017 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-internal-tls-certs\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.578180 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-combined-ca-bundle\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.580668 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn54q\" (UniqueName: \"kubernetes.io/projected/38710bdf-e679-45f4-b3a6-597a3b1cb186-kube-api-access-nn54q\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.831376 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-public-tls-certs\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.837564 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-scripts\") pod \"placement-65fbfb5b48-rchlc\" (UID: \"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:21:32 crc kubenswrapper[4739]: I0218 14:21:32.837978 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38710bdf-e679-45f4-b3a6-597a3b1cb186-config-data\") pod \"placement-65fbfb5b48-rchlc\" (UID: 
\"38710bdf-e679-45f4-b3a6-597a3b1cb186\") " pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:21:33 crc kubenswrapper[4739]: I0218 14:21:33.005871 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:21:37 crc kubenswrapper[4739]: I0218 14:21:37.213560 4739 generic.go:334] "Generic (PLEG): container finished" podID="a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8" containerID="d0d344e509459df1445da7eae6edf0b5c1a43772e911ac197e49dc6ffc6fe7a4" exitCode=0 Feb 18 14:21:37 crc kubenswrapper[4739]: I0218 14:21:37.213694 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h5s86" event={"ID":"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8","Type":"ContainerDied","Data":"d0d344e509459df1445da7eae6edf0b5c1a43772e911ac197e49dc6ffc6fe7a4"} Feb 18 14:21:37 crc kubenswrapper[4739]: I0218 14:21:37.417277 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-65fbfb5b48-rchlc"] Feb 18 14:21:37 crc kubenswrapper[4739]: E0218 14:21:37.428954 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.226531 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2a576aa-9125-4096-8ee5-ac83d6aaee01","Type":"ContainerStarted","Data":"77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82"} Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.226727 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerName="ceilometer-notification-agent" containerID="cri-o://c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff" gracePeriod=30 Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.227048 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.227390 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerName="proxy-httpd" containerID="cri-o://77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82" gracePeriod=30 Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.227454 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerName="sg-core" containerID="cri-o://709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3" gracePeriod=30 Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.230191 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-65fbfb5b48-rchlc" event={"ID":"38710bdf-e679-45f4-b3a6-597a3b1cb186","Type":"ContainerStarted","Data":"5b6162e9273de8f9cb959ff7ffa10674372c041cc24173e5449c3947335f5a9f"} Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.230219 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-65fbfb5b48-rchlc" event={"ID":"38710bdf-e679-45f4-b3a6-597a3b1cb186","Type":"ContainerStarted","Data":"e138c9fb56a1b2659323325dd24bef442707b7c5c27da58fb1ff15c79ac1c701"} Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.230237 4739 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-65fbfb5b48-rchlc" event={"ID":"38710bdf-e679-45f4-b3a6-597a3b1cb186","Type":"ContainerStarted","Data":"870b5536b541d29a1685d8df33006e3196991a5a62e728ad1a4c16a4398901aa"} Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.230296 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.230498 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.284908 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-65fbfb5b48-rchlc" podStartSLOduration=6.28489074 podStartE2EDuration="6.28489074s" podCreationTimestamp="2026-02-18 14:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:38.269389845 +0000 UTC m=+1330.765110777" watchObservedRunningTime="2026-02-18 14:21:38.28489074 +0000 UTC m=+1330.780611662" Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.646222 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h5s86" Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.714123 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-db-sync-config-data\") pod \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\" (UID: \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\") " Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.714506 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7wlp\" (UniqueName: \"kubernetes.io/projected/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-kube-api-access-s7wlp\") pod \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\" (UID: \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\") " Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.714610 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-combined-ca-bundle\") pod \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\" (UID: \"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8\") " Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.718992 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8" (UID: "a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.719020 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-kube-api-access-s7wlp" (OuterVolumeSpecName: "kube-api-access-s7wlp") pod "a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8" (UID: "a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8"). InnerVolumeSpecName "kube-api-access-s7wlp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.747606 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8" (UID: "a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.817136 4739 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.817163 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7wlp\" (UniqueName: \"kubernetes.io/projected/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-kube-api-access-s7wlp\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:38 crc kubenswrapper[4739]: I0218 14:21:38.817174 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.101137 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.124961 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngbgv\" (UniqueName: \"kubernetes.io/projected/e2a576aa-9125-4096-8ee5-ac83d6aaee01-kube-api-access-ngbgv\") pod \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.125345 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-scripts\") pod \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.133766 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-scripts" (OuterVolumeSpecName: "scripts") pod "e2a576aa-9125-4096-8ee5-ac83d6aaee01" (UID: "e2a576aa-9125-4096-8ee5-ac83d6aaee01"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.133911 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2a576aa-9125-4096-8ee5-ac83d6aaee01-kube-api-access-ngbgv" (OuterVolumeSpecName: "kube-api-access-ngbgv") pod "e2a576aa-9125-4096-8ee5-ac83d6aaee01" (UID: "e2a576aa-9125-4096-8ee5-ac83d6aaee01"). InnerVolumeSpecName "kube-api-access-ngbgv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.227061 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a576aa-9125-4096-8ee5-ac83d6aaee01-run-httpd\") pod \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.227180 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a576aa-9125-4096-8ee5-ac83d6aaee01-log-httpd\") pod \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.227251 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-config-data\") pod \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.227275 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-combined-ca-bundle\") pod \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.227312 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-sg-core-conf-yaml\") pod \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\" (UID: \"e2a576aa-9125-4096-8ee5-ac83d6aaee01\") " Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.227927 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngbgv\" (UniqueName: \"kubernetes.io/projected/e2a576aa-9125-4096-8ee5-ac83d6aaee01-kube-api-access-ngbgv\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.227945 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.228870 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2a576aa-9125-4096-8ee5-ac83d6aaee01-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e2a576aa-9125-4096-8ee5-ac83d6aaee01" (UID: "e2a576aa-9125-4096-8ee5-ac83d6aaee01"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.229138 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2a576aa-9125-4096-8ee5-ac83d6aaee01-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e2a576aa-9125-4096-8ee5-ac83d6aaee01" (UID: "e2a576aa-9125-4096-8ee5-ac83d6aaee01"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.242061 4739 generic.go:334] "Generic (PLEG): container finished" podID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerID="77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82" exitCode=0 Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.242128 4739 generic.go:334] "Generic (PLEG): container finished" podID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerID="709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3" exitCode=2 Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.242145 4739 generic.go:334] "Generic (PLEG): container finished" podID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerID="c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff" exitCode=0 Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.242216 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2a576aa-9125-4096-8ee5-ac83d6aaee01","Type":"ContainerDied","Data":"77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82"} Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.242284 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2a576aa-9125-4096-8ee5-ac83d6aaee01","Type":"ContainerDied","Data":"709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3"} Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.242299 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2a576aa-9125-4096-8ee5-ac83d6aaee01","Type":"ContainerDied","Data":"c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff"} Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.242312 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2a576aa-9125-4096-8ee5-ac83d6aaee01","Type":"ContainerDied","Data":"012dc8f477dfe3bd25f7fe5decf6c00cb3c850250a18972e074f41544b597e70"} Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.242334 4739 scope.go:117] "RemoveContainer" containerID="77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.242511 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.256465 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h5s86" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.257748 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h5s86" event={"ID":"a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8","Type":"ContainerDied","Data":"d2307342ad946d88b327f9c4998f5fef25fdf0715d6dc8137505b684ccb0bf1f"} Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.260493 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2307342ad946d88b327f9c4998f5fef25fdf0715d6dc8137505b684ccb0bf1f" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.270068 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e2a576aa-9125-4096-8ee5-ac83d6aaee01" (UID: "e2a576aa-9125-4096-8ee5-ac83d6aaee01"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.311705 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2a576aa-9125-4096-8ee5-ac83d6aaee01" (UID: "e2a576aa-9125-4096-8ee5-ac83d6aaee01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.324399 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-config-data" (OuterVolumeSpecName: "config-data") pod "e2a576aa-9125-4096-8ee5-ac83d6aaee01" (UID: "e2a576aa-9125-4096-8ee5-ac83d6aaee01"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.330546 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a576aa-9125-4096-8ee5-ac83d6aaee01-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.330585 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.330598 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.330611 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2a576aa-9125-4096-8ee5-ac83d6aaee01-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.330622 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a576aa-9125-4096-8ee5-ac83d6aaee01-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.334660 4739 scope.go:117] "RemoveContainer" containerID="709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.357726 4739 scope.go:117] "RemoveContainer" containerID="c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.455815 4739 scope.go:117] "RemoveContainer" containerID="77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82" Feb 18 14:21:39 crc kubenswrapper[4739]: E0218 14:21:39.457227 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82\": container with ID starting with 77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82 not found: ID does not exist" containerID="77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.457277 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82"} err="failed to get container status 
\"77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82\": rpc error: code = NotFound desc = could not find container \"77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82\": container with ID starting with 77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82 not found: ID does not exist" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.457302 4739 scope.go:117] "RemoveContainer" containerID="709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3" Feb 18 14:21:39 crc kubenswrapper[4739]: E0218 14:21:39.457782 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3\": container with ID starting with 709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3 not found: ID does not exist" containerID="709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.457816 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3"} err="failed to get container status \"709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3\": rpc error: code = NotFound desc = could not find container \"709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3\": container with ID starting with 709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3 not found: ID does not exist" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.457836 4739 scope.go:117] "RemoveContainer" containerID="c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff" Feb 18 14:21:39 crc kubenswrapper[4739]: E0218 14:21:39.458032 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff\": container with ID starting with c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff not found: ID does not exist" containerID="c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.458056 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff"} err="failed to get container status \"c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff\": rpc error: code = NotFound desc = could not find container \"c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff\": container with ID starting with c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff not found: ID does not exist" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.458069 4739 scope.go:117] "RemoveContainer" containerID="77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.458220 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82"} err="failed to get container status \"77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82\": rpc error: code = NotFound desc = could not find container \"77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82\": container with ID starting with 77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82 not found: 
ID does not exist" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.458236 4739 scope.go:117] "RemoveContainer" containerID="709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.458515 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3"} err="failed to get container status \"709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3\": rpc error: code = NotFound desc = could not find container \"709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3\": container with ID starting with 709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3 not found: ID does not exist" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.458543 4739 scope.go:117] "RemoveContainer" containerID="c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.459063 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff"} err="failed to get container status \"c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff\": rpc error: code = NotFound desc = could not find container \"c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff\": container with ID starting with c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff not found: ID does not exist" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.459085 4739 scope.go:117] "RemoveContainer" containerID="77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.462475 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82"} err="failed to get container status \"77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82\": rpc error: code = NotFound desc = could not find container \"77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82\": container with ID starting with 77da7d3cada2f212910b224af4d8be44e3848e5d9ba7c80db1d7de68ad080b82 not found: ID does not exist" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.462510 4739 scope.go:117] "RemoveContainer" containerID="709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.462880 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3"} err="failed to get container status \"709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3\": rpc error: code = NotFound desc = could not find container \"709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3\": container with ID starting with 709c7ef8378f1061c5ee71691c3ab678c662b0dde8d59266dd3164eb2d79eed3 not found: ID does not exist" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.462901 4739 scope.go:117] "RemoveContainer" containerID="c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.463089 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff"} err="failed to get container status 
\"c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff\": rpc error: code = NotFound desc = could not find container \"c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff\": container with ID starting with c2e30e9e2d9c4c9a7b3f6076c24682ef8515165aa9eb91437a926b64b36f61ff not found: ID does not exist" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.517542 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-765d88ff9c-smd7n"] Feb 18 14:21:39 crc kubenswrapper[4739]: E0218 14:21:39.518135 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerName="sg-core" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.518156 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerName="sg-core" Feb 18 14:21:39 crc kubenswrapper[4739]: E0218 14:21:39.518191 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8" containerName="barbican-db-sync" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.518199 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8" containerName="barbican-db-sync" Feb 18 14:21:39 crc kubenswrapper[4739]: E0218 14:21:39.518217 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerName="proxy-httpd" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.518225 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerName="proxy-httpd" Feb 18 14:21:39 crc kubenswrapper[4739]: E0218 14:21:39.518251 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerName="ceilometer-notification-agent" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.518259 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerName="ceilometer-notification-agent" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.518467 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerName="ceilometer-notification-agent" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.518503 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerName="proxy-httpd" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.518516 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8" containerName="barbican-db-sync" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.518527 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" containerName="sg-core" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.519666 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.523463 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.523813 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.523935 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xnq4d" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.532002 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-575dbd86bd-gjcs6"] Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.533764 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.536680 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.618539 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-575dbd86bd-gjcs6"] Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.646404 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53848a1c-a5c5-4948-a45f-2ba01bc166ca-config-data-custom\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.646517 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f41089a-bbe1-4371-9a89-38423dca256c-config-data-custom\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.646798 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f41089a-bbe1-4371-9a89-38423dca256c-config-data\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.646938 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njfq9\" (UniqueName: \"kubernetes.io/projected/8f41089a-bbe1-4371-9a89-38423dca256c-kube-api-access-njfq9\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.646977 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq6pf\" (UniqueName: \"kubernetes.io/projected/53848a1c-a5c5-4948-a45f-2ba01bc166ca-kube-api-access-pq6pf\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 
14:21:39.647031 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53848a1c-a5c5-4948-a45f-2ba01bc166ca-combined-ca-bundle\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.647365 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f41089a-bbe1-4371-9a89-38423dca256c-logs\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.647520 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53848a1c-a5c5-4948-a45f-2ba01bc166ca-config-data\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.647562 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53848a1c-a5c5-4948-a45f-2ba01bc166ca-logs\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.647593 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f41089a-bbe1-4371-9a89-38423dca256c-combined-ca-bundle\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.676239 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-765d88ff9c-smd7n"] Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.750819 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f41089a-bbe1-4371-9a89-38423dca256c-logs\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.750925 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53848a1c-a5c5-4948-a45f-2ba01bc166ca-config-data\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.750954 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53848a1c-a5c5-4948-a45f-2ba01bc166ca-logs\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.750976 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8f41089a-bbe1-4371-9a89-38423dca256c-combined-ca-bundle\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.751038 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53848a1c-a5c5-4948-a45f-2ba01bc166ca-config-data-custom\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.751080 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f41089a-bbe1-4371-9a89-38423dca256c-config-data-custom\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.751190 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f41089a-bbe1-4371-9a89-38423dca256c-config-data\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.751247 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njfq9\" (UniqueName: \"kubernetes.io/projected/8f41089a-bbe1-4371-9a89-38423dca256c-kube-api-access-njfq9\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.751272 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq6pf\" (UniqueName: \"kubernetes.io/projected/53848a1c-a5c5-4948-a45f-2ba01bc166ca-kube-api-access-pq6pf\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.751301 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53848a1c-a5c5-4948-a45f-2ba01bc166ca-combined-ca-bundle\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.751586 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53848a1c-a5c5-4948-a45f-2ba01bc166ca-logs\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.751902 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f41089a-bbe1-4371-9a89-38423dca256c-logs\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.757059 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f41089a-bbe1-4371-9a89-38423dca256c-config-data-custom\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.759591 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f41089a-bbe1-4371-9a89-38423dca256c-combined-ca-bundle\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.768828 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-6wx56"] Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.770515 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53848a1c-a5c5-4948-a45f-2ba01bc166ca-combined-ca-bundle\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.770768 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f41089a-bbe1-4371-9a89-38423dca256c-config-data\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.771979 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.774013 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53848a1c-a5c5-4948-a45f-2ba01bc166ca-config-data-custom\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.777147 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53848a1c-a5c5-4948-a45f-2ba01bc166ca-config-data\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.795610 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq6pf\" (UniqueName: \"kubernetes.io/projected/53848a1c-a5c5-4948-a45f-2ba01bc166ca-kube-api-access-pq6pf\") pod \"barbican-worker-765d88ff9c-smd7n\" (UID: \"53848a1c-a5c5-4948-a45f-2ba01bc166ca\") " pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.796666 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njfq9\" (UniqueName: \"kubernetes.io/projected/8f41089a-bbe1-4371-9a89-38423dca256c-kube-api-access-njfq9\") pod \"barbican-keystone-listener-575dbd86bd-gjcs6\" (UID: \"8f41089a-bbe1-4371-9a89-38423dca256c\") " pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.834783 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-6wx56"] Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.852875 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.852935 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-config\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.852968 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.853080 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.853209 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb87s\" (UniqueName: \"kubernetes.io/projected/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-kube-api-access-bb87s\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.853239 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.867045 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-765d88ff9c-smd7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.896279 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.896843 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.908993 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.917833 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.921079 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.924851 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.924849 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.931642 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.955761 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb87s\" (UniqueName: \"kubernetes.io/projected/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-kube-api-access-bb87s\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.955845 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.956065 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.956132 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-config\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.956152 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.956317 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.964971 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.967725 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.969258 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.973551 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.974349 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-b4b66db68-ntx7n"] Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.976566 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.976694 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-config\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.979074 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.987376 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b4b66db68-ntx7n"] Feb 18 14:21:39 crc kubenswrapper[4739]: I0218 14:21:39.987618 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb87s\" (UniqueName: \"kubernetes.io/projected/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-kube-api-access-bb87s\") pod \"dnsmasq-dns-7c67bffd47-6wx56\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.058887 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2l2h\" (UniqueName: \"kubernetes.io/projected/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-kube-api-access-h2l2h\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.059298 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.059344 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-combined-ca-bundle\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.059375 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-config-data\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.059399 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-scripts\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.059417 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-log-httpd\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.059490 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.059517 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/064975cb-44bb-44b1-8d99-ea09a947b8b8-logs\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.059569 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-config-data\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.059678 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwwjd\" (UniqueName: \"kubernetes.io/projected/064975cb-44bb-44b1-8d99-ea09a947b8b8-kube-api-access-dwwjd\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.059744 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-config-data-custom\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.059806 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-run-httpd\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.098276 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.105433 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:21:40 crc kubenswrapper[4739]: E0218 14:21:40.106332 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-h2l2h log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="89478c1f-2d02-4e05-ab0b-e257a0dc3d08" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.167687 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.167757 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-combined-ca-bundle\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.167788 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-config-data\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.167813 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-scripts\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.167827 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-log-httpd\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.167885 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.167908 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/064975cb-44bb-44b1-8d99-ea09a947b8b8-logs\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.167983 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-config-data\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 
14:21:40.168122 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwwjd\" (UniqueName: \"kubernetes.io/projected/064975cb-44bb-44b1-8d99-ea09a947b8b8-kube-api-access-dwwjd\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.168182 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-config-data-custom\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.168247 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-run-httpd\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.168375 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2l2h\" (UniqueName: \"kubernetes.io/projected/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-kube-api-access-h2l2h\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.170134 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/064975cb-44bb-44b1-8d99-ea09a947b8b8-logs\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.170716 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-log-httpd\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.181869 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-run-httpd\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.225346 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-config-data-custom\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.225752 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-config-data\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.225944 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-scripts\") pod \"ceilometer-0\" (UID: 
\"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.226036 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.226607 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-combined-ca-bundle\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.227094 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-config-data\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.228566 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.233671 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwwjd\" (UniqueName: \"kubernetes.io/projected/064975cb-44bb-44b1-8d99-ea09a947b8b8-kube-api-access-dwwjd\") pod \"barbican-api-b4b66db68-ntx7n\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.244365 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2l2h\" (UniqueName: \"kubernetes.io/projected/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-kube-api-access-h2l2h\") pod \"ceilometer-0\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.295098 4739 generic.go:334] "Generic (PLEG): container finished" podID="3edd4390-e376-469a-b7c5-9bd7bf9dd100" containerID="cb1eddfed9e44b497a97463dd1b3569fad968271c4c4d74bfb3de94948277b04" exitCode=0 Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.295297 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-2dhxm" event={"ID":"3edd4390-e376-469a-b7c5-9bd7bf9dd100","Type":"ContainerDied","Data":"cb1eddfed9e44b497a97463dd1b3569fad968271c4c4d74bfb3de94948277b04"} Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.302634 4739 generic.go:334] "Generic (PLEG): container finished" podID="b3697715-3f94-4086-99ab-65a492bd7542" containerID="615daa9d2c89107b5d8baf69578eb811649ddb2693aedf9b046cefb6786b3af5" exitCode=0 Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.302707 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hc8hk" event={"ID":"b3697715-3f94-4086-99ab-65a492bd7542","Type":"ContainerDied","Data":"615daa9d2c89107b5d8baf69578eb811649ddb2693aedf9b046cefb6786b3af5"} Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.308294 4739 generic.go:334] "Generic (PLEG): container finished" podID="51d77527-a940-4423-ac63-4a7cdf366510" 
containerID="13f81a775889f6ea108dde89cc1b11f4232f55a79b2165f0775cd5d113f547b2" exitCode=0 Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.308408 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hm27f" event={"ID":"51d77527-a940-4423-ac63-4a7cdf366510","Type":"ContainerDied","Data":"13f81a775889f6ea108dde89cc1b11f4232f55a79b2165f0775cd5d113f547b2"} Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.310527 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.361879 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.425464 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2a576aa-9125-4096-8ee5-ac83d6aaee01" path="/var/lib/kubelet/pods/e2a576aa-9125-4096-8ee5-ac83d6aaee01/volumes" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.476639 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.485622 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-575dbd86bd-gjcs6"] Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.486536 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-sg-core-conf-yaml\") pod \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.486593 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-config-data\") pod \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.486631 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2l2h\" (UniqueName: \"kubernetes.io/projected/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-kube-api-access-h2l2h\") pod \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.486744 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-scripts\") pod \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.486795 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-run-httpd\") pod \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.486875 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-combined-ca-bundle\") pod \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.486936 4739 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-log-httpd\") pod \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\" (UID: \"89478c1f-2d02-4e05-ab0b-e257a0dc3d08\") " Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.487616 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "89478c1f-2d02-4e05-ab0b-e257a0dc3d08" (UID: "89478c1f-2d02-4e05-ab0b-e257a0dc3d08"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.495426 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "89478c1f-2d02-4e05-ab0b-e257a0dc3d08" (UID: "89478c1f-2d02-4e05-ab0b-e257a0dc3d08"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.505720 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89478c1f-2d02-4e05-ab0b-e257a0dc3d08" (UID: "89478c1f-2d02-4e05-ab0b-e257a0dc3d08"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.509426 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.509489 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.509504 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.511822 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-kube-api-access-h2l2h" (OuterVolumeSpecName: "kube-api-access-h2l2h") pod "89478c1f-2d02-4e05-ab0b-e257a0dc3d08" (UID: "89478c1f-2d02-4e05-ab0b-e257a0dc3d08"). InnerVolumeSpecName "kube-api-access-h2l2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.513969 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-scripts" (OuterVolumeSpecName: "scripts") pod "89478c1f-2d02-4e05-ab0b-e257a0dc3d08" (UID: "89478c1f-2d02-4e05-ab0b-e257a0dc3d08"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.514079 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "89478c1f-2d02-4e05-ab0b-e257a0dc3d08" (UID: "89478c1f-2d02-4e05-ab0b-e257a0dc3d08"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.519613 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-config-data" (OuterVolumeSpecName: "config-data") pod "89478c1f-2d02-4e05-ab0b-e257a0dc3d08" (UID: "89478c1f-2d02-4e05-ab0b-e257a0dc3d08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.611735 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.614488 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.614514 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2l2h\" (UniqueName: \"kubernetes.io/projected/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-kube-api-access-h2l2h\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.614532 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89478c1f-2d02-4e05-ab0b-e257a0dc3d08-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.650724 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-765d88ff9c-smd7n"] Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.800629 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-6wx56"] Feb 18 14:21:40 crc kubenswrapper[4739]: W0218 14:21:40.804302 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5e15713_5e2e_4ede_9a0a_231e49dc0deb.slice/crio-ba93d9aa54ac2215f4e253e17d3a5a19152448a81d9e24cc8afa82199ab26e2b WatchSource:0}: Error finding container ba93d9aa54ac2215f4e253e17d3a5a19152448a81d9e24cc8afa82199ab26e2b: Status 404 returned error can't find the container with id ba93d9aa54ac2215f4e253e17d3a5a19152448a81d9e24cc8afa82199ab26e2b Feb 18 14:21:40 crc kubenswrapper[4739]: I0218 14:21:40.979644 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b4b66db68-ntx7n"] Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.321564 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-765d88ff9c-smd7n" event={"ID":"53848a1c-a5c5-4948-a45f-2ba01bc166ca","Type":"ContainerStarted","Data":"27fbf6ed19e0af0f8d03849dc013e6a9d725589e5f8a24b612618b6cde8be6d0"} Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.322792 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" 
event={"ID":"8f41089a-bbe1-4371-9a89-38423dca256c","Type":"ContainerStarted","Data":"72a6cc41691def01dbec482f45d1d622290d61beb07ccae66f71bf732054c3a4"} Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.324334 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b4b66db68-ntx7n" event={"ID":"064975cb-44bb-44b1-8d99-ea09a947b8b8","Type":"ContainerStarted","Data":"7c8f4fc08e3d71e41150f03ab573682f2c49c5142be298c07b7fb3ee868889dd"} Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.324364 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b4b66db68-ntx7n" event={"ID":"064975cb-44bb-44b1-8d99-ea09a947b8b8","Type":"ContainerStarted","Data":"e6e7dfb42369260f31fbf7b2c8b3ddee88d4d1f06f45a187f08b311b7e5a41ef"} Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.325831 4739 generic.go:334] "Generic (PLEG): container finished" podID="d5e15713-5e2e-4ede-9a0a-231e49dc0deb" containerID="40434f5263b59598a9631dc90282fb16565ed994e203e9dee42e52c11d8acad5" exitCode=0 Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.325938 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.326487 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" event={"ID":"d5e15713-5e2e-4ede-9a0a-231e49dc0deb","Type":"ContainerDied","Data":"40434f5263b59598a9631dc90282fb16565ed994e203e9dee42e52c11d8acad5"} Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.326567 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" event={"ID":"d5e15713-5e2e-4ede-9a0a-231e49dc0deb","Type":"ContainerStarted","Data":"ba93d9aa54ac2215f4e253e17d3a5a19152448a81d9e24cc8afa82199ab26e2b"} Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.591497 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.611629 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.623926 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.630108 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.634098 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.634264 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.635263 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.761107 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-run-httpd\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.761391 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-scripts\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.761530 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xblb8\" (UniqueName: \"kubernetes.io/projected/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-kube-api-access-xblb8\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.761572 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.761624 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-config-data\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.761712 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-log-httpd\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.761768 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.864125 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 
14:21:41.864232 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-run-httpd\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.864265 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-scripts\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.864353 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xblb8\" (UniqueName: \"kubernetes.io/projected/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-kube-api-access-xblb8\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.864422 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.864481 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-config-data\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.864552 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-log-httpd\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.865034 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-run-httpd\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.865535 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-log-httpd\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.874145 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.874489 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.876805 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-config-data\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.877343 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-scripts\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.886816 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xblb8\" (UniqueName: \"kubernetes.io/projected/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-kube-api-access-xblb8\") pod \"ceilometer-0\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " pod="openstack/ceilometer-0" Feb 18 14:21:41 crc kubenswrapper[4739]: I0218 14:21:41.961243 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.342991 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hm27f" event={"ID":"51d77527-a940-4423-ac63-4a7cdf366510","Type":"ContainerDied","Data":"b800d2e5f20a2d68b8e0f58bfc2fa70fc222830a78f8d8d41068e13af2965ba2"} Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.343289 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b800d2e5f20a2d68b8e0f58bfc2fa70fc222830a78f8d8d41068e13af2965ba2" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.345022 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" event={"ID":"d5e15713-5e2e-4ede-9a0a-231e49dc0deb","Type":"ContainerStarted","Data":"00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f"} Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.345169 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.348593 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b4b66db68-ntx7n" event={"ID":"064975cb-44bb-44b1-8d99-ea09a947b8b8","Type":"ContainerStarted","Data":"2602390e342c4e0155ec05397045ae37047581af9665cd9582b1ac532f791135"} Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.348696 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.348728 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.352432 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hc8hk" event={"ID":"b3697715-3f94-4086-99ab-65a492bd7542","Type":"ContainerDied","Data":"7acef4fd8413ff750142ee237ef31a3901dacad49674c51eb84a96f1a5fb1404"} Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.352515 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7acef4fd8413ff750142ee237ef31a3901dacad49674c51eb84a96f1a5fb1404" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.369781 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" podStartSLOduration=3.369761921 
podStartE2EDuration="3.369761921s" podCreationTimestamp="2026-02-18 14:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:42.363814989 +0000 UTC m=+1334.859535921" watchObservedRunningTime="2026-02-18 14:21:42.369761921 +0000 UTC m=+1334.865482853" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.389306 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-b4b66db68-ntx7n" podStartSLOduration=3.389283558 podStartE2EDuration="3.389283558s" podCreationTimestamp="2026-02-18 14:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:42.382874255 +0000 UTC m=+1334.878595177" watchObservedRunningTime="2026-02-18 14:21:42.389283558 +0000 UTC m=+1334.885004480" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.391058 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-hm27f" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.402130 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hc8hk" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.442296 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89478c1f-2d02-4e05-ab0b-e257a0dc3d08" path="/var/lib/kubelet/pods/89478c1f-2d02-4e05-ab0b-e257a0dc3d08/volumes" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.482586 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-config-data\") pod \"51d77527-a940-4423-ac63-4a7cdf366510\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.482679 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh97j\" (UniqueName: \"kubernetes.io/projected/51d77527-a940-4423-ac63-4a7cdf366510-kube-api-access-vh97j\") pod \"51d77527-a940-4423-ac63-4a7cdf366510\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.482706 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51d77527-a940-4423-ac63-4a7cdf366510-etc-machine-id\") pod \"51d77527-a940-4423-ac63-4a7cdf366510\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.482734 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-combined-ca-bundle\") pod \"51d77527-a940-4423-ac63-4a7cdf366510\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.482812 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51d77527-a940-4423-ac63-4a7cdf366510-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "51d77527-a940-4423-ac63-4a7cdf366510" (UID: "51d77527-a940-4423-ac63-4a7cdf366510"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.482900 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-db-sync-config-data\") pod \"51d77527-a940-4423-ac63-4a7cdf366510\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.483054 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-scripts\") pod \"51d77527-a940-4423-ac63-4a7cdf366510\" (UID: \"51d77527-a940-4423-ac63-4a7cdf366510\") " Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.483790 4739 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51d77527-a940-4423-ac63-4a7cdf366510-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.498389 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "51d77527-a940-4423-ac63-4a7cdf366510" (UID: "51d77527-a940-4423-ac63-4a7cdf366510"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.501840 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-scripts" (OuterVolumeSpecName: "scripts") pod "51d77527-a940-4423-ac63-4a7cdf366510" (UID: "51d77527-a940-4423-ac63-4a7cdf366510"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.508303 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51d77527-a940-4423-ac63-4a7cdf366510-kube-api-access-vh97j" (OuterVolumeSpecName: "kube-api-access-vh97j") pod "51d77527-a940-4423-ac63-4a7cdf366510" (UID: "51d77527-a940-4423-ac63-4a7cdf366510"). InnerVolumeSpecName "kube-api-access-vh97j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.542744 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51d77527-a940-4423-ac63-4a7cdf366510" (UID: "51d77527-a940-4423-ac63-4a7cdf366510"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.586168 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw7wr\" (UniqueName: \"kubernetes.io/projected/b3697715-3f94-4086-99ab-65a492bd7542-kube-api-access-vw7wr\") pod \"b3697715-3f94-4086-99ab-65a492bd7542\" (UID: \"b3697715-3f94-4086-99ab-65a492bd7542\") " Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.586319 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b3697715-3f94-4086-99ab-65a492bd7542-config\") pod \"b3697715-3f94-4086-99ab-65a492bd7542\" (UID: \"b3697715-3f94-4086-99ab-65a492bd7542\") " Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.586343 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3697715-3f94-4086-99ab-65a492bd7542-combined-ca-bundle\") pod \"b3697715-3f94-4086-99ab-65a492bd7542\" (UID: \"b3697715-3f94-4086-99ab-65a492bd7542\") " Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.587214 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.587241 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh97j\" (UniqueName: \"kubernetes.io/projected/51d77527-a940-4423-ac63-4a7cdf366510-kube-api-access-vh97j\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.587257 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.587269 4739 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.588857 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-config-data" (OuterVolumeSpecName: "config-data") pod "51d77527-a940-4423-ac63-4a7cdf366510" (UID: "51d77527-a940-4423-ac63-4a7cdf366510"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.595007 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3697715-3f94-4086-99ab-65a492bd7542-kube-api-access-vw7wr" (OuterVolumeSpecName: "kube-api-access-vw7wr") pod "b3697715-3f94-4086-99ab-65a492bd7542" (UID: "b3697715-3f94-4086-99ab-65a492bd7542"). InnerVolumeSpecName "kube-api-access-vw7wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.649073 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3697715-3f94-4086-99ab-65a492bd7542-config" (OuterVolumeSpecName: "config") pod "b3697715-3f94-4086-99ab-65a492bd7542" (UID: "b3697715-3f94-4086-99ab-65a492bd7542"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.650588 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3697715-3f94-4086-99ab-65a492bd7542-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3697715-3f94-4086-99ab-65a492bd7542" (UID: "b3697715-3f94-4086-99ab-65a492bd7542"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.689297 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw7wr\" (UniqueName: \"kubernetes.io/projected/b3697715-3f94-4086-99ab-65a492bd7542-kube-api-access-vw7wr\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.689335 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b3697715-3f94-4086-99ab-65a492bd7542-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.689349 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3697715-3f94-4086-99ab-65a492bd7542-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:42 crc kubenswrapper[4739]: I0218 14:21:42.689357 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d77527-a940-4423-ac63-4a7cdf366510-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.343913 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-2dhxm" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.379269 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5fccfc9568-dvccq"] Feb 18 14:21:43 crc kubenswrapper[4739]: E0218 14:21:43.380871 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3edd4390-e376-469a-b7c5-9bd7bf9dd100" containerName="heat-db-sync" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.380897 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3edd4390-e376-469a-b7c5-9bd7bf9dd100" containerName="heat-db-sync" Feb 18 14:21:43 crc kubenswrapper[4739]: E0218 14:21:43.380910 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3697715-3f94-4086-99ab-65a492bd7542" containerName="neutron-db-sync" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.380917 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3697715-3f94-4086-99ab-65a492bd7542" containerName="neutron-db-sync" Feb 18 14:21:43 crc kubenswrapper[4739]: E0218 14:21:43.380951 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d77527-a940-4423-ac63-4a7cdf366510" containerName="cinder-db-sync" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.380960 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d77527-a940-4423-ac63-4a7cdf366510" containerName="cinder-db-sync" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.381202 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3697715-3f94-4086-99ab-65a492bd7542" containerName="neutron-db-sync" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.381238 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3edd4390-e376-469a-b7c5-9bd7bf9dd100" containerName="heat-db-sync" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.381254 
4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="51d77527-a940-4423-ac63-4a7cdf366510" containerName="cinder-db-sync" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.385577 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-hm27f" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.387770 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-2dhxm" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.387922 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hc8hk" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.390016 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-2dhxm" event={"ID":"3edd4390-e376-469a-b7c5-9bd7bf9dd100","Type":"ContainerDied","Data":"ab3a872330660cb89409af9b912cee12aa6ccbf272a46a86fd90d8fd6dc9f4c2"} Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.390076 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab3a872330660cb89409af9b912cee12aa6ccbf272a46a86fd90d8fd6dc9f4c2" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.390170 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.397967 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.398160 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.478898 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5fccfc9568-dvccq"] Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.574199 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wgcv\" (UniqueName: \"kubernetes.io/projected/3edd4390-e376-469a-b7c5-9bd7bf9dd100-kube-api-access-6wgcv\") pod \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\" (UID: \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\") " Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.574393 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3edd4390-e376-469a-b7c5-9bd7bf9dd100-config-data\") pod \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\" (UID: \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\") " Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.574602 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3edd4390-e376-469a-b7c5-9bd7bf9dd100-combined-ca-bundle\") pod \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\" (UID: \"3edd4390-e376-469a-b7c5-9bd7bf9dd100\") " Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.576682 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca969df-0549-4d07-ada4-2e0515419a1d-logs\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.577204 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-public-tls-certs\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.577259 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-internal-tls-certs\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.580488 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-combined-ca-bundle\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.580963 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-config-data\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.581043 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9hw6\" (UniqueName: \"kubernetes.io/projected/aca969df-0549-4d07-ada4-2e0515419a1d-kube-api-access-m9hw6\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.581187 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-config-data-custom\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.581633 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3edd4390-e376-469a-b7c5-9bd7bf9dd100-kube-api-access-6wgcv" (OuterVolumeSpecName: "kube-api-access-6wgcv") pod "3edd4390-e376-469a-b7c5-9bd7bf9dd100" (UID: "3edd4390-e376-469a-b7c5-9bd7bf9dd100"). InnerVolumeSpecName "kube-api-access-6wgcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.646869 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.667315 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3edd4390-e376-469a-b7c5-9bd7bf9dd100-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3edd4390-e376-469a-b7c5-9bd7bf9dd100" (UID: "3edd4390-e376-469a-b7c5-9bd7bf9dd100"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.684705 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-6wx56"] Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.686084 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-config-data-custom\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.686257 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca969df-0549-4d07-ada4-2e0515419a1d-logs\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.686404 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-public-tls-certs\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.686496 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-internal-tls-certs\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.686617 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-combined-ca-bundle\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.697935 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-config-data\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.698260 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9hw6\" (UniqueName: \"kubernetes.io/projected/aca969df-0549-4d07-ada4-2e0515419a1d-kube-api-access-m9hw6\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.698539 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3edd4390-e376-469a-b7c5-9bd7bf9dd100-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.698645 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wgcv\" (UniqueName: \"kubernetes.io/projected/3edd4390-e376-469a-b7c5-9bd7bf9dd100-kube-api-access-6wgcv\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:43 crc kubenswrapper[4739]: 
I0218 14:21:43.690055 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca969df-0549-4d07-ada4-2e0515419a1d-logs\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.699545 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-config-data-custom\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.704762 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-config-data\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.719642 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-combined-ca-bundle\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.722844 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-internal-tls-certs\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.728600 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aca969df-0549-4d07-ada4-2e0515419a1d-public-tls-certs\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.742229 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9hw6\" (UniqueName: \"kubernetes.io/projected/aca969df-0549-4d07-ada4-2e0515419a1d-kube-api-access-m9hw6\") pod \"barbican-api-5fccfc9568-dvccq\" (UID: \"aca969df-0549-4d07-ada4-2e0515419a1d\") " pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.749184 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-rrbd5"] Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.757286 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.800558 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.800631 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.800714 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-config\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.800738 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.800769 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.800977 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pvw7\" (UniqueName: \"kubernetes.io/projected/9387c384-203f-40d3-91d1-9e487b283231-kube-api-access-4pvw7\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.819949 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.834040 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.835966 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.844139 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9bgt9" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.844359 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.844521 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.844780 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.906121 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-config-data\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.906176 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pvw7\" (UniqueName: \"kubernetes.io/projected/9387c384-203f-40d3-91d1-9e487b283231-kube-api-access-4pvw7\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.906235 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.906259 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb4bm\" (UniqueName: \"kubernetes.io/projected/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-kube-api-access-vb4bm\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.906326 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.906376 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.906417 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-scripts\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.906515 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.906564 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-config\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.906593 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.906633 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.906649 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.908082 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.908592 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-config\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.908817 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.909215 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.909318 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3edd4390-e376-469a-b7c5-9bd7bf9dd100-config-data" 
(OuterVolumeSpecName: "config-data") pod "3edd4390-e376-469a-b7c5-9bd7bf9dd100" (UID: "3edd4390-e376-469a-b7c5-9bd7bf9dd100"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.909393 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.915099 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-rrbd5"] Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.932110 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.946498 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pvw7\" (UniqueName: \"kubernetes.io/projected/9387c384-203f-40d3-91d1-9e487b283231-kube-api-access-4pvw7\") pod \"dnsmasq-dns-848cf88cfc-rrbd5\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.950684 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6cb887488-w2vb4"] Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.953176 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.960960 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.961120 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.961296 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-crc55" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.961395 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 18 14:21:43 crc kubenswrapper[4739]: I0218 14:21:43.971120 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6cb887488-w2vb4"] Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.002176 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-rrbd5"] Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.003177 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.014146 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-config-data\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.014219 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-config\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.014254 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.014283 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-combined-ca-bundle\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.014310 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb4bm\" (UniqueName: \"kubernetes.io/projected/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-kube-api-access-vb4bm\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.014339 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-ovndb-tls-certs\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.014401 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghkr2\" (UniqueName: \"kubernetes.io/projected/7e8a55f3-28f4-46da-bc87-6d16902b2dba-kube-api-access-ghkr2\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.014503 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-scripts\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.014560 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-httpd-config\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc 
kubenswrapper[4739]: I0218 14:21:44.014613 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.014700 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.014795 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3edd4390-e376-469a-b7c5-9bd7bf9dd100-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.017990 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.025080 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-scripts\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.025503 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.026559 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-config-data\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.028102 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.042551 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb4bm\" (UniqueName: \"kubernetes.io/projected/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-kube-api-access-vb4bm\") pod \"cinder-scheduler-0\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " pod="openstack/cinder-scheduler-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.053507 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-grdr9"] Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.064854 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.096230 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-grdr9"] Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.118889 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-httpd-config\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.118948 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-config\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.118988 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-dns-svc\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.119097 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.119124 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.119164 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-config\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.119198 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-combined-ca-bundle\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.119238 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-ovndb-tls-certs\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.119292 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghkr2\" (UniqueName: 
\"kubernetes.io/projected/7e8a55f3-28f4-46da-bc87-6d16902b2dba-kube-api-access-ghkr2\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.119336 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.119364 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltdp2\" (UniqueName: \"kubernetes.io/projected/9337767c-12ba-460b-854a-5c2e69db4a5c-kube-api-access-ltdp2\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.147842 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-httpd-config\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.150755 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-ovndb-tls-certs\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.154861 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-combined-ca-bundle\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.161142 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.162139 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghkr2\" (UniqueName: \"kubernetes.io/projected/7e8a55f3-28f4-46da-bc87-6d16902b2dba-kube-api-access-ghkr2\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.163936 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.166206 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.183895 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.202647 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.221308 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.221349 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-logs\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.221485 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.222286 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltdp2\" (UniqueName: \"kubernetes.io/projected/9337767c-12ba-460b-854a-5c2e69db4a5c-kube-api-access-ltdp2\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.222367 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-config-data\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.222636 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-config\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.222671 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-scripts\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.222746 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-dns-svc\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.222845 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-config-data-custom\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.222873 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25qv4\" (UniqueName: \"kubernetes.io/projected/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-kube-api-access-25qv4\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.223016 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.223065 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.223089 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.223482 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-config\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.223951 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.224267 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-dns-svc\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.224622 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.224957 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.245566 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-config\") pod \"neutron-6cb887488-w2vb4\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.251566 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltdp2\" (UniqueName: \"kubernetes.io/projected/9337767c-12ba-460b-854a-5c2e69db4a5c-kube-api-access-ltdp2\") pod \"dnsmasq-dns-6578955fd5-grdr9\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.326766 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.327116 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-config-data\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.327245 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-scripts\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.327337 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-config-data-custom\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.327372 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25qv4\" (UniqueName: \"kubernetes.io/projected/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-kube-api-access-25qv4\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.327471 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.327580 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.327623 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-logs\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.328242 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-logs\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " 
pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.328634 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.334002 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-config-data-custom\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.334611 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-scripts\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.334783 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.346859 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-config-data\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.347597 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25qv4\" (UniqueName: \"kubernetes.io/projected/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-kube-api-access-25qv4\") pod \"cinder-api-0\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.436510 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" podUID="d5e15713-5e2e-4ede-9a0a-231e49dc0deb" containerName="dnsmasq-dns" containerID="cri-o://00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f" gracePeriod=10 Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.439631 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc","Type":"ContainerStarted","Data":"8c8032c3a1234bf623502d6fafa31158115ef887ed497b5adb6540ed67e79d70"} Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.439665 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-765d88ff9c-smd7n" event={"ID":"53848a1c-a5c5-4948-a45f-2ba01bc166ca","Type":"ContainerStarted","Data":"b1fdc945ce1e2ca101cf0efe471251f683dbf0d7225dedc33750f002d3546bd5"} Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.439679 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" event={"ID":"8f41089a-bbe1-4371-9a89-38423dca256c","Type":"ContainerStarted","Data":"9184f0b0a885e5ad488bc2df04052a3a06116040f41b71728ada9eb8430b1f38"} Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.482100 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.493698 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 14:21:44 crc kubenswrapper[4739]: I0218 14:21:44.839913 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5fccfc9568-dvccq"] Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.159493 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-rrbd5"] Feb 18 14:21:45 crc kubenswrapper[4739]: W0218 14:21:45.182572 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9387c384_203f_40d3_91d1_9e487b283231.slice/crio-cae25905f7886de9a8d6591b3de408a0cf4ef97bbf64ec076b2b594b6b3a3f4b WatchSource:0}: Error finding container cae25905f7886de9a8d6591b3de408a0cf4ef97bbf64ec076b2b594b6b3a3f4b: Status 404 returned error can't find the container with id cae25905f7886de9a8d6591b3de408a0cf4ef97bbf64ec076b2b594b6b3a3f4b Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.436858 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.492758 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" event={"ID":"8f41089a-bbe1-4371-9a89-38423dca256c","Type":"ContainerStarted","Data":"8d637634a7b0c9f9482e853eddb3d4a410c2953d9ecb3bf1fd2eb965271b6f5d"} Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.514459 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" event={"ID":"9387c384-203f-40d3-91d1-9e487b283231","Type":"ContainerStarted","Data":"cae25905f7886de9a8d6591b3de408a0cf4ef97bbf64ec076b2b594b6b3a3f4b"} Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.524891 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-575dbd86bd-gjcs6" podStartSLOduration=4.002766899 podStartE2EDuration="6.524846051s" podCreationTimestamp="2026-02-18 14:21:39 +0000 UTC" firstStartedPulling="2026-02-18 14:21:40.497175357 +0000 UTC m=+1332.992896279" lastFinishedPulling="2026-02-18 14:21:43.019254509 +0000 UTC m=+1335.514975431" observedRunningTime="2026-02-18 14:21:45.514831596 +0000 UTC m=+1338.010552528" watchObservedRunningTime="2026-02-18 14:21:45.524846051 +0000 UTC m=+1338.020566973" Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.528377 4739 generic.go:334] "Generic (PLEG): container finished" podID="d5e15713-5e2e-4ede-9a0a-231e49dc0deb" containerID="00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f" exitCode=0 Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.528491 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" event={"ID":"d5e15713-5e2e-4ede-9a0a-231e49dc0deb","Type":"ContainerDied","Data":"00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f"} Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.528523 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" event={"ID":"d5e15713-5e2e-4ede-9a0a-231e49dc0deb","Type":"ContainerDied","Data":"ba93d9aa54ac2215f4e253e17d3a5a19152448a81d9e24cc8afa82199ab26e2b"} Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.528541 4739 scope.go:117] 
"RemoveContainer" containerID="00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f" Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.528698 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-6wx56" Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.543013 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fccfc9568-dvccq" event={"ID":"aca969df-0549-4d07-ada4-2e0515419a1d","Type":"ContainerStarted","Data":"2b92b5c4a773c7115b1ef12bec885a140f238d803824f16fa02f5eb967ccfb46"} Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.543067 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fccfc9568-dvccq" event={"ID":"aca969df-0549-4d07-ada4-2e0515419a1d","Type":"ContainerStarted","Data":"e18ab58b38a4e0188d42f9ddcbb74eefa95de7e39125e3c317645fe197ae7d56"} Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.553011 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc","Type":"ContainerStarted","Data":"fee11676261091cbd3ef8b82bd38773fb586e3f02824dcfdf641b5fbd18e0091"} Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.567557 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-765d88ff9c-smd7n" event={"ID":"53848a1c-a5c5-4948-a45f-2ba01bc166ca","Type":"ContainerStarted","Data":"ce4fdbc97460f6bbc0626886a9d5a7b302054b8bff4942cc4be8129f35b706ac"} Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.594500 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6cb887488-w2vb4"] Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.595070 4739 scope.go:117] "RemoveContainer" containerID="40434f5263b59598a9631dc90282fb16565ed994e203e9dee42e52c11d8acad5" Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.607860 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-765d88ff9c-smd7n" podStartSLOduration=4.216942487 podStartE2EDuration="6.607840505s" podCreationTimestamp="2026-02-18 14:21:39 +0000 UTC" firstStartedPulling="2026-02-18 14:21:40.667639871 +0000 UTC m=+1333.163360793" lastFinishedPulling="2026-02-18 14:21:43.058537889 +0000 UTC m=+1335.554258811" observedRunningTime="2026-02-18 14:21:45.590998636 +0000 UTC m=+1338.086719558" watchObservedRunningTime="2026-02-18 14:21:45.607840505 +0000 UTC m=+1338.103561427" Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.622619 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb87s\" (UniqueName: \"kubernetes.io/projected/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-kube-api-access-bb87s\") pod \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.622670 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-dns-swift-storage-0\") pod \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.623487 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-dns-svc\") pod \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\" (UID: 
\"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.623537 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-config\") pod \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.623563 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-ovsdbserver-nb\") pod \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.623654 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-ovsdbserver-sb\") pod \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\" (UID: \"d5e15713-5e2e-4ede-9a0a-231e49dc0deb\") " Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.644691 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-kube-api-access-bb87s" (OuterVolumeSpecName: "kube-api-access-bb87s") pod "d5e15713-5e2e-4ede-9a0a-231e49dc0deb" (UID: "d5e15713-5e2e-4ede-9a0a-231e49dc0deb"). InnerVolumeSpecName "kube-api-access-bb87s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.680782 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-grdr9"] Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.720686 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.725711 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb87s\" (UniqueName: \"kubernetes.io/projected/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-kube-api-access-bb87s\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.742611 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 14:21:45 crc kubenswrapper[4739]: W0218 14:21:45.788663 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9337767c_12ba_460b_854a_5c2e69db4a5c.slice/crio-fa732d1eda4ac1c7763b996c5ef44f9b843ec150eee66ab022f29219cacb77ef WatchSource:0}: Error finding container fa732d1eda4ac1c7763b996c5ef44f9b843ec150eee66ab022f29219cacb77ef: Status 404 returned error can't find the container with id fa732d1eda4ac1c7763b996c5ef44f9b843ec150eee66ab022f29219cacb77ef Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.859912 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-config" (OuterVolumeSpecName: "config") pod "d5e15713-5e2e-4ede-9a0a-231e49dc0deb" (UID: "d5e15713-5e2e-4ede-9a0a-231e49dc0deb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.863118 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d5e15713-5e2e-4ede-9a0a-231e49dc0deb" (UID: "d5e15713-5e2e-4ede-9a0a-231e49dc0deb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.880775 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d5e15713-5e2e-4ede-9a0a-231e49dc0deb" (UID: "d5e15713-5e2e-4ede-9a0a-231e49dc0deb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.881615 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d5e15713-5e2e-4ede-9a0a-231e49dc0deb" (UID: "d5e15713-5e2e-4ede-9a0a-231e49dc0deb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.883734 4739 scope.go:117] "RemoveContainer" containerID="00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f" Feb 18 14:21:45 crc kubenswrapper[4739]: E0218 14:21:45.889567 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f\": container with ID starting with 00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f not found: ID does not exist" containerID="00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f" Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.889611 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f"} err="failed to get container status \"00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f\": rpc error: code = NotFound desc = could not find container \"00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f\": container with ID starting with 00193cab62603910a6b8502c9f5166e7c0114e23684d9aeeb080d0ee159c957f not found: ID does not exist" Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.889638 4739 scope.go:117] "RemoveContainer" containerID="40434f5263b59598a9631dc90282fb16565ed994e203e9dee42e52c11d8acad5" Feb 18 14:21:45 crc kubenswrapper[4739]: E0218 14:21:45.894591 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40434f5263b59598a9631dc90282fb16565ed994e203e9dee42e52c11d8acad5\": container with ID starting with 40434f5263b59598a9631dc90282fb16565ed994e203e9dee42e52c11d8acad5 not found: ID does not exist" containerID="40434f5263b59598a9631dc90282fb16565ed994e203e9dee42e52c11d8acad5" Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.894783 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40434f5263b59598a9631dc90282fb16565ed994e203e9dee42e52c11d8acad5"} err="failed to get container status 
\"40434f5263b59598a9631dc90282fb16565ed994e203e9dee42e52c11d8acad5\": rpc error: code = NotFound desc = could not find container \"40434f5263b59598a9631dc90282fb16565ed994e203e9dee42e52c11d8acad5\": container with ID starting with 40434f5263b59598a9631dc90282fb16565ed994e203e9dee42e52c11d8acad5 not found: ID does not exist" Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.942052 4739 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.942287 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.942297 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.942305 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:45 crc kubenswrapper[4739]: I0218 14:21:45.946524 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d5e15713-5e2e-4ede-9a0a-231e49dc0deb" (UID: "d5e15713-5e2e-4ede-9a0a-231e49dc0deb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.054609 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5e15713-5e2e-4ede-9a0a-231e49dc0deb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.295432 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-6wx56"] Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.309781 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-6wx56"] Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.432096 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5e15713-5e2e-4ede-9a0a-231e49dc0deb" path="/var/lib/kubelet/pods/d5e15713-5e2e-4ede-9a0a-231e49dc0deb/volumes" Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.458447 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.647987 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d","Type":"ContainerStarted","Data":"0aa6f9d0113c0aad83b0711a9f1f95a0f189e2ee86406cef9587f35ef42914d9"} Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.671801 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cb887488-w2vb4" event={"ID":"7e8a55f3-28f4-46da-bc87-6d16902b2dba","Type":"ContainerStarted","Data":"92e077d54516a226953141815b27472b6e615b27ebdcfef077823d82e467f49d"} Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.682690 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"f0bb43b5-4e4b-4074-ba67-59ff0d726fab","Type":"ContainerStarted","Data":"75c0f160662dd962ffd03771a130a555c07977ec30eae95c749c55561113bb84"} Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.700169 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" event={"ID":"9337767c-12ba-460b-854a-5c2e69db4a5c","Type":"ContainerStarted","Data":"fa732d1eda4ac1c7763b996c5ef44f9b843ec150eee66ab022f29219cacb77ef"} Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.711635 4739 generic.go:334] "Generic (PLEG): container finished" podID="9387c384-203f-40d3-91d1-9e487b283231" containerID="a31ea1ea91692b6f59a18cc45e275b69650e934f0ab9589f21701db6c795a435" exitCode=0 Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.711908 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" event={"ID":"9387c384-203f-40d3-91d1-9e487b283231","Type":"ContainerDied","Data":"a31ea1ea91692b6f59a18cc45e275b69650e934f0ab9589f21701db6c795a435"} Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.748251 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fccfc9568-dvccq" event={"ID":"aca969df-0549-4d07-ada4-2e0515419a1d","Type":"ContainerStarted","Data":"5ff2372362de935a341df81a43764833bc8c8d62279f3f1075d4f3ba99ab0802"} Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.750571 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.750618 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:46 crc kubenswrapper[4739]: I0218 14:21:46.805725 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5fccfc9568-dvccq" podStartSLOduration=3.805706116 podStartE2EDuration="3.805706116s" podCreationTimestamp="2026-02-18 14:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:46.775006344 +0000 UTC m=+1339.270727276" watchObservedRunningTime="2026-02-18 14:21:46.805706116 +0000 UTC m=+1339.301427038" Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.478749 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.604184 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-config\") pod \"9387c384-203f-40d3-91d1-9e487b283231\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.604632 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-ovsdbserver-sb\") pod \"9387c384-203f-40d3-91d1-9e487b283231\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.604967 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-ovsdbserver-nb\") pod \"9387c384-203f-40d3-91d1-9e487b283231\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.605158 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pvw7\" (UniqueName: \"kubernetes.io/projected/9387c384-203f-40d3-91d1-9e487b283231-kube-api-access-4pvw7\") pod \"9387c384-203f-40d3-91d1-9e487b283231\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.605334 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-dns-svc\") pod \"9387c384-203f-40d3-91d1-9e487b283231\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.605479 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-dns-swift-storage-0\") pod \"9387c384-203f-40d3-91d1-9e487b283231\" (UID: \"9387c384-203f-40d3-91d1-9e487b283231\") " Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.624859 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9387c384-203f-40d3-91d1-9e487b283231-kube-api-access-4pvw7" (OuterVolumeSpecName: "kube-api-access-4pvw7") pod "9387c384-203f-40d3-91d1-9e487b283231" (UID: "9387c384-203f-40d3-91d1-9e487b283231"). InnerVolumeSpecName "kube-api-access-4pvw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.648368 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9387c384-203f-40d3-91d1-9e487b283231" (UID: "9387c384-203f-40d3-91d1-9e487b283231"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.651909 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9387c384-203f-40d3-91d1-9e487b283231" (UID: "9387c384-203f-40d3-91d1-9e487b283231"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.656170 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-config" (OuterVolumeSpecName: "config") pod "9387c384-203f-40d3-91d1-9e487b283231" (UID: "9387c384-203f-40d3-91d1-9e487b283231"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.658163 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9387c384-203f-40d3-91d1-9e487b283231" (UID: "9387c384-203f-40d3-91d1-9e487b283231"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.663936 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9387c384-203f-40d3-91d1-9e487b283231" (UID: "9387c384-203f-40d3-91d1-9e487b283231"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.710663 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.710918 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pvw7\" (UniqueName: \"kubernetes.io/projected/9387c384-203f-40d3-91d1-9e487b283231-kube-api-access-4pvw7\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.711035 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.711136 4739 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.711224 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.711314 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9387c384-203f-40d3-91d1-9e487b283231-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.758684 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" event={"ID":"9387c384-203f-40d3-91d1-9e487b283231","Type":"ContainerDied","Data":"cae25905f7886de9a8d6591b3de408a0cf4ef97bbf64ec076b2b594b6b3a3f4b"} Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.758735 4739 scope.go:117] "RemoveContainer" containerID="a31ea1ea91692b6f59a18cc45e275b69650e934f0ab9589f21701db6c795a435" Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.758905 4739 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-rrbd5" Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.765890 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc","Type":"ContainerStarted","Data":"1a8fca3cd8abe9648355c8b1fc41f8b7bfe5f0fd27b741bbf92fafac2053e432"} Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.769406 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cb887488-w2vb4" event={"ID":"7e8a55f3-28f4-46da-bc87-6d16902b2dba","Type":"ContainerStarted","Data":"8dd2b9302e6dd8b8a788c6130228739df1a58a6ee1a8d8355dc5ab489138ee01"} Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.769488 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cb887488-w2vb4" event={"ID":"7e8a55f3-28f4-46da-bc87-6d16902b2dba","Type":"ContainerStarted","Data":"dac67b364bafdc30f9188f9edb3326eeba8fe15953fcbfe0ae9864e55228745d"} Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.769512 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.771907 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f0bb43b5-4e4b-4074-ba67-59ff0d726fab","Type":"ContainerStarted","Data":"51c86b3e76646ccace7cb768aa196771df840d5aa0602f13a9e3d3f8fd198f42"} Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.774277 4739 generic.go:334] "Generic (PLEG): container finished" podID="9337767c-12ba-460b-854a-5c2e69db4a5c" containerID="674be441708c52d00270c7a887278841578e6b9bf30714644be7ecc79213fa7b" exitCode=0 Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.775209 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" event={"ID":"9337767c-12ba-460b-854a-5c2e69db4a5c","Type":"ContainerDied","Data":"674be441708c52d00270c7a887278841578e6b9bf30714644be7ecc79213fa7b"} Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.805499 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6cb887488-w2vb4" podStartSLOduration=4.8054794 podStartE2EDuration="4.8054794s" podCreationTimestamp="2026-02-18 14:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:47.78822117 +0000 UTC m=+1340.283942112" watchObservedRunningTime="2026-02-18 14:21:47.8054794 +0000 UTC m=+1340.301200322" Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.877560 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-rrbd5"] Feb 18 14:21:47 crc kubenswrapper[4739]: I0218 14:21:47.904642 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-rrbd5"] Feb 18 14:21:48 crc kubenswrapper[4739]: I0218 14:21:48.440743 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9387c384-203f-40d3-91d1-9e487b283231" path="/var/lib/kubelet/pods/9387c384-203f-40d3-91d1-9e487b283231/volumes" Feb 18 14:21:48 crc kubenswrapper[4739]: I0218 14:21:48.789884 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" event={"ID":"9337767c-12ba-460b-854a-5c2e69db4a5c","Type":"ContainerStarted","Data":"52b68e08b4643ed4bb44ac6b88f494d230cc74dfa319d3b1f92462acb959fc47"} Feb 18 14:21:48 crc kubenswrapper[4739]: I0218 14:21:48.834101 
4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" podStartSLOduration=5.834080539 podStartE2EDuration="5.834080539s" podCreationTimestamp="2026-02-18 14:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:48.82747277 +0000 UTC m=+1341.323193702" watchObservedRunningTime="2026-02-18 14:21:48.834080539 +0000 UTC m=+1341.329801461" Feb 18 14:21:49 crc kubenswrapper[4739]: I0218 14:21:49.483779 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:49 crc kubenswrapper[4739]: I0218 14:21:49.804951 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f0bb43b5-4e4b-4074-ba67-59ff0d726fab","Type":"ContainerStarted","Data":"f3277f9c953c856503e9f54f23df005c12ffcd64974ef18efe5d6f5daaca7db8"} Feb 18 14:21:49 crc kubenswrapper[4739]: I0218 14:21:49.805083 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 18 14:21:49 crc kubenswrapper[4739]: I0218 14:21:49.805146 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f0bb43b5-4e4b-4074-ba67-59ff0d726fab" containerName="cinder-api" containerID="cri-o://f3277f9c953c856503e9f54f23df005c12ffcd64974ef18efe5d6f5daaca7db8" gracePeriod=30 Feb 18 14:21:49 crc kubenswrapper[4739]: I0218 14:21:49.805111 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f0bb43b5-4e4b-4074-ba67-59ff0d726fab" containerName="cinder-api-log" containerID="cri-o://51c86b3e76646ccace7cb768aa196771df840d5aa0602f13a9e3d3f8fd198f42" gracePeriod=30 Feb 18 14:21:49 crc kubenswrapper[4739]: I0218 14:21:49.812181 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc","Type":"ContainerStarted","Data":"8b75480f249109a9022e9ab32c8f19bcca001a279e1f76a25451ad0745c9106a"} Feb 18 14:21:49 crc kubenswrapper[4739]: I0218 14:21:49.834060 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.834035267 podStartE2EDuration="5.834035267s" podCreationTimestamp="2026-02-18 14:21:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:49.82866214 +0000 UTC m=+1342.324383062" watchObservedRunningTime="2026-02-18 14:21:49.834035267 +0000 UTC m=+1342.329756189" Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.836270 4739 generic.go:334] "Generic (PLEG): container finished" podID="f0bb43b5-4e4b-4074-ba67-59ff0d726fab" containerID="f3277f9c953c856503e9f54f23df005c12ffcd64974ef18efe5d6f5daaca7db8" exitCode=0 Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.836972 4739 generic.go:334] "Generic (PLEG): container finished" podID="f0bb43b5-4e4b-4074-ba67-59ff0d726fab" containerID="51c86b3e76646ccace7cb768aa196771df840d5aa0602f13a9e3d3f8fd198f42" exitCode=143 Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.836358 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f0bb43b5-4e4b-4074-ba67-59ff0d726fab","Type":"ContainerDied","Data":"f3277f9c953c856503e9f54f23df005c12ffcd64974ef18efe5d6f5daaca7db8"} Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 
14:21:50.837307 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f0bb43b5-4e4b-4074-ba67-59ff0d726fab","Type":"ContainerDied","Data":"51c86b3e76646ccace7cb768aa196771df840d5aa0602f13a9e3d3f8fd198f42"} Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.837323 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f0bb43b5-4e4b-4074-ba67-59ff0d726fab","Type":"ContainerDied","Data":"75c0f160662dd962ffd03771a130a555c07977ec30eae95c749c55561113bb84"} Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.837334 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75c0f160662dd962ffd03771a130a555c07977ec30eae95c749c55561113bb84" Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.929884 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-77cbbcb957-6xzzv"] Feb 18 14:21:50 crc kubenswrapper[4739]: E0218 14:21:50.930361 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5e15713-5e2e-4ede-9a0a-231e49dc0deb" containerName="init" Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.930380 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5e15713-5e2e-4ede-9a0a-231e49dc0deb" containerName="init" Feb 18 14:21:50 crc kubenswrapper[4739]: E0218 14:21:50.930412 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5e15713-5e2e-4ede-9a0a-231e49dc0deb" containerName="dnsmasq-dns" Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.930419 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5e15713-5e2e-4ede-9a0a-231e49dc0deb" containerName="dnsmasq-dns" Feb 18 14:21:50 crc kubenswrapper[4739]: E0218 14:21:50.930435 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9387c384-203f-40d3-91d1-9e487b283231" containerName="init" Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.930441 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9387c384-203f-40d3-91d1-9e487b283231" containerName="init" Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.930669 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5e15713-5e2e-4ede-9a0a-231e49dc0deb" containerName="dnsmasq-dns" Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.930693 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9387c384-203f-40d3-91d1-9e487b283231" containerName="init" Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.932000 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.940513 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.940698 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 18 14:21:50 crc kubenswrapper[4739]: I0218 14:21:50.963730 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77cbbcb957-6xzzv"] Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.005183 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.120697 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-scripts\") pod \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.120859 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-logs\") pod \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.120889 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-config-data-custom\") pod \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.120957 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-etc-machine-id\") pod \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.121014 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25qv4\" (UniqueName: \"kubernetes.io/projected/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-kube-api-access-25qv4\") pod \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.121221 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-config-data\") pod \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.121276 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-combined-ca-bundle\") pod \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\" (UID: \"f0bb43b5-4e4b-4074-ba67-59ff0d726fab\") " Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.121704 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-httpd-config\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.121784 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-ovndb-tls-certs\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.121810 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-internal-tls-certs\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.121848 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-public-tls-certs\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.121872 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcpt2\" (UniqueName: \"kubernetes.io/projected/6225bd93-c14b-4682-8e07-e6ca3cce37c9-kube-api-access-tcpt2\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.121894 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-combined-ca-bundle\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.122033 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-config\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.122214 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f0bb43b5-4e4b-4074-ba67-59ff0d726fab" (UID: "f0bb43b5-4e4b-4074-ba67-59ff0d726fab"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.122802 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-logs" (OuterVolumeSpecName: "logs") pod "f0bb43b5-4e4b-4074-ba67-59ff0d726fab" (UID: "f0bb43b5-4e4b-4074-ba67-59ff0d726fab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.127220 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-scripts" (OuterVolumeSpecName: "scripts") pod "f0bb43b5-4e4b-4074-ba67-59ff0d726fab" (UID: "f0bb43b5-4e4b-4074-ba67-59ff0d726fab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.127341 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-kube-api-access-25qv4" (OuterVolumeSpecName: "kube-api-access-25qv4") pod "f0bb43b5-4e4b-4074-ba67-59ff0d726fab" (UID: "f0bb43b5-4e4b-4074-ba67-59ff0d726fab"). InnerVolumeSpecName "kube-api-access-25qv4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.197614 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f0bb43b5-4e4b-4074-ba67-59ff0d726fab" (UID: "f0bb43b5-4e4b-4074-ba67-59ff0d726fab"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.222740 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-config-data" (OuterVolumeSpecName: "config-data") pod "f0bb43b5-4e4b-4074-ba67-59ff0d726fab" (UID: "f0bb43b5-4e4b-4074-ba67-59ff0d726fab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.224232 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-public-tls-certs\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.224312 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcpt2\" (UniqueName: \"kubernetes.io/projected/6225bd93-c14b-4682-8e07-e6ca3cce37c9-kube-api-access-tcpt2\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.224336 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-combined-ca-bundle\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.224430 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-config\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.224545 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-httpd-config\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.224593 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-ovndb-tls-certs\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.224609 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-internal-tls-certs\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " 
pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.225870 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.232580 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0bb43b5-4e4b-4074-ba67-59ff0d726fab" (UID: "f0bb43b5-4e4b-4074-ba67-59ff0d726fab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.233039 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.234064 4739 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.234091 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25qv4\" (UniqueName: \"kubernetes.io/projected/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-kube-api-access-25qv4\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.234106 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.234117 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.234389 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-combined-ca-bundle\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.237663 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-public-tls-certs\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.241411 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-httpd-config\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.243640 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-internal-tls-certs\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " 
pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.248402 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-config\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.249395 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcpt2\" (UniqueName: \"kubernetes.io/projected/6225bd93-c14b-4682-8e07-e6ca3cce37c9-kube-api-access-tcpt2\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.253081 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6225bd93-c14b-4682-8e07-e6ca3cce37c9-ovndb-tls-certs\") pod \"neutron-77cbbcb957-6xzzv\" (UID: \"6225bd93-c14b-4682-8e07-e6ca3cce37c9\") " pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.315959 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.341264 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0bb43b5-4e4b-4074-ba67-59ff0d726fab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.860517 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc","Type":"ContainerStarted","Data":"8fee94e5c0f5f5f60603f0d079f34bec83f00648183f659c017f17757a2ba096"} Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.866122 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 14:21:51 crc kubenswrapper[4739]: I0218 14:21:51.979601 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.038681 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.049098 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 18 14:21:52 crc kubenswrapper[4739]: E0218 14:21:52.049678 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0bb43b5-4e4b-4074-ba67-59ff0d726fab" containerName="cinder-api" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.049698 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0bb43b5-4e4b-4074-ba67-59ff0d726fab" containerName="cinder-api" Feb 18 14:21:52 crc kubenswrapper[4739]: E0218 14:21:52.049739 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0bb43b5-4e4b-4074-ba67-59ff0d726fab" containerName="cinder-api-log" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.049746 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0bb43b5-4e4b-4074-ba67-59ff0d726fab" containerName="cinder-api-log" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.049933 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0bb43b5-4e4b-4074-ba67-59ff0d726fab" containerName="cinder-api" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.049975 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0bb43b5-4e4b-4074-ba67-59ff0d726fab" containerName="cinder-api-log" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.051181 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.054300 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.062829 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.063018 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.069647 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.091617 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-config-data\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.095729 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-public-tls-certs\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.096041 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-config-data-custom\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.096160 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfxd2\" (UniqueName: \"kubernetes.io/projected/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-kube-api-access-jfxd2\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.096199 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-scripts\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.096322 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-logs\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.096363 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.096405 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.096441 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-etc-machine-id\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.117520 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77cbbcb957-6xzzv"] Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.198057 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-logs\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.198673 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.198725 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.198756 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-etc-machine-id\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.198837 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-config-data\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.198886 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-public-tls-certs\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.199015 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-config-data-custom\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.199049 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-logs\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc 
kubenswrapper[4739]: I0218 14:21:52.199077 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfxd2\" (UniqueName: \"kubernetes.io/projected/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-kube-api-access-jfxd2\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.199101 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-scripts\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.199458 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-etc-machine-id\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.207314 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-public-tls-certs\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.210521 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.211545 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-config-data\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.221545 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-scripts\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.221944 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.224311 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-config-data-custom\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.227804 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfxd2\" (UniqueName: \"kubernetes.io/projected/54fd1c90-48dd-4ae7-b2db-d80aa5f14a24-kube-api-access-jfxd2\") pod \"cinder-api-0\" (UID: \"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24\") " pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.265058 4739 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.435808 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0bb43b5-4e4b-4074-ba67-59ff0d726fab" path="/var/lib/kubelet/pods/f0bb43b5-4e4b-4074-ba67-59ff0d726fab/volumes" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.691651 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.831345 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.848131 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.912046 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77cbbcb957-6xzzv" event={"ID":"6225bd93-c14b-4682-8e07-e6ca3cce37c9","Type":"ContainerStarted","Data":"961d8e82bef200408c26b76fd31c29fe20ffc075389e5f00afb2ee92ae0f1189"} Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.912096 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77cbbcb957-6xzzv" event={"ID":"6225bd93-c14b-4682-8e07-e6ca3cce37c9","Type":"ContainerStarted","Data":"67e947ef3b29bcee78454f4f059e18f91251978cb6401673d5658e61c52bbcbd"} Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.913657 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d","Type":"ContainerStarted","Data":"78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530"} Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.914928 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 14:21:52 crc kubenswrapper[4739]: I0218 14:21:52.948643 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.200093226 podStartE2EDuration="11.948622216s" podCreationTimestamp="2026-02-18 14:21:41 +0000 UTC" firstStartedPulling="2026-02-18 14:21:43.600436487 +0000 UTC m=+1336.096157409" lastFinishedPulling="2026-02-18 14:21:51.348965477 +0000 UTC m=+1343.844686399" observedRunningTime="2026-02-18 14:21:52.938082647 +0000 UTC m=+1345.433803579" watchObservedRunningTime="2026-02-18 14:21:52.948622216 +0000 UTC m=+1345.444343148" Feb 18 14:21:53 crc kubenswrapper[4739]: W0218 14:21:53.139326 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54fd1c90_48dd_4ae7_b2db_d80aa5f14a24.slice/crio-d5ee795b09ef16c7c27319dfb689bc1d3d39ed090eb8d2c65b2f73acadcc392e WatchSource:0}: Error finding container d5ee795b09ef16c7c27319dfb689bc1d3d39ed090eb8d2c65b2f73acadcc392e: Status 404 returned error can't find the container with id d5ee795b09ef16c7c27319dfb689bc1d3d39ed090eb8d2c65b2f73acadcc392e Feb 18 14:21:53 crc kubenswrapper[4739]: I0218 14:21:53.961795 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24","Type":"ContainerStarted","Data":"609f5a32a8670c7d32b7ced94f4a84aabdf37ad61537acc306cfcce6060bf2f3"} Feb 18 14:21:53 crc kubenswrapper[4739]: I0218 14:21:53.962371 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24","Type":"ContainerStarted","Data":"d5ee795b09ef16c7c27319dfb689bc1d3d39ed090eb8d2c65b2f73acadcc392e"} Feb 18 14:21:53 crc kubenswrapper[4739]: I0218 14:21:53.979686 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d","Type":"ContainerStarted","Data":"6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5"} Feb 18 14:21:53 crc kubenswrapper[4739]: I0218 14:21:53.996043 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77cbbcb957-6xzzv" event={"ID":"6225bd93-c14b-4682-8e07-e6ca3cce37c9","Type":"ContainerStarted","Data":"9d23b6f46048d80401e2ee78e9cfb18970e6b85639b275b61df775a20a28387f"} Feb 18 14:21:54 crc kubenswrapper[4739]: I0218 14:21:54.009828 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.053338665 podStartE2EDuration="11.009812334s" podCreationTimestamp="2026-02-18 14:21:43 +0000 UTC" firstStartedPulling="2026-02-18 14:21:45.825379148 +0000 UTC m=+1338.321100070" lastFinishedPulling="2026-02-18 14:21:50.781852817 +0000 UTC m=+1343.277573739" observedRunningTime="2026-02-18 14:21:54.003816331 +0000 UTC m=+1346.499537263" watchObservedRunningTime="2026-02-18 14:21:54.009812334 +0000 UTC m=+1346.505533256" Feb 18 14:21:54 crc kubenswrapper[4739]: I0218 14:21:54.052513 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-77cbbcb957-6xzzv" podStartSLOduration=4.052496251 podStartE2EDuration="4.052496251s" podCreationTimestamp="2026-02-18 14:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:54.049754231 +0000 UTC m=+1346.545475163" watchObservedRunningTime="2026-02-18 14:21:54.052496251 +0000 UTC m=+1346.548217173" Feb 18 14:21:54 crc kubenswrapper[4739]: I0218 14:21:54.203425 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 14:21:54 crc kubenswrapper[4739]: I0218 14:21:54.487164 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:21:54 crc kubenswrapper[4739]: I0218 14:21:54.631946 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-7mcdv"] Feb 18 14:21:54 crc kubenswrapper[4739]: I0218 14:21:54.639654 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" podUID="f4b54fe6-91fa-4ba1-9a4e-135277494a27" containerName="dnsmasq-dns" containerID="cri-o://0fa401e0fef3f9cb42562b511b0eebc5a44973f242c043cd8c922196427d9cb3" gracePeriod=10 Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.009957 4739 generic.go:334] "Generic (PLEG): container finished" podID="f4b54fe6-91fa-4ba1-9a4e-135277494a27" containerID="0fa401e0fef3f9cb42562b511b0eebc5a44973f242c043cd8c922196427d9cb3" exitCode=0 Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.010242 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" event={"ID":"f4b54fe6-91fa-4ba1-9a4e-135277494a27","Type":"ContainerDied","Data":"0fa401e0fef3f9cb42562b511b0eebc5a44973f242c043cd8c922196427d9cb3"} Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.013782 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"54fd1c90-48dd-4ae7-b2db-d80aa5f14a24","Type":"ContainerStarted","Data":"b9c2bcf20b4ec25dfd07a3c36e2b1c886ee7fdef5720dd054cc18f6812ece15e"} Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.013814 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.014011 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.035098 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.035083527 podStartE2EDuration="4.035083527s" podCreationTimestamp="2026-02-18 14:21:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:21:55.032468411 +0000 UTC m=+1347.528189333" watchObservedRunningTime="2026-02-18 14:21:55.035083527 +0000 UTC m=+1347.530804449" Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.548416 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.700460 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-svc\") pod \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.700623 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-swift-storage-0\") pod \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.700651 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9dzc\" (UniqueName: \"kubernetes.io/projected/f4b54fe6-91fa-4ba1-9a4e-135277494a27-kube-api-access-w9dzc\") pod \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.700781 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-ovsdbserver-sb\") pod \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.700817 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-ovsdbserver-nb\") pod \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.700872 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-config\") pod \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.743151 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f4b54fe6-91fa-4ba1-9a4e-135277494a27-kube-api-access-w9dzc" (OuterVolumeSpecName: "kube-api-access-w9dzc") pod "f4b54fe6-91fa-4ba1-9a4e-135277494a27" (UID: "f4b54fe6-91fa-4ba1-9a4e-135277494a27"). InnerVolumeSpecName "kube-api-access-w9dzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.790353 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-config" (OuterVolumeSpecName: "config") pod "f4b54fe6-91fa-4ba1-9a4e-135277494a27" (UID: "f4b54fe6-91fa-4ba1-9a4e-135277494a27"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.796416 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f4b54fe6-91fa-4ba1-9a4e-135277494a27" (UID: "f4b54fe6-91fa-4ba1-9a4e-135277494a27"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.804162 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f4b54fe6-91fa-4ba1-9a4e-135277494a27" (UID: "f4b54fe6-91fa-4ba1-9a4e-135277494a27"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.804636 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-svc\") pod \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\" (UID: \"f4b54fe6-91fa-4ba1-9a4e-135277494a27\") " Feb 18 14:21:55 crc kubenswrapper[4739]: W0218 14:21:55.804755 4739 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/f4b54fe6-91fa-4ba1-9a4e-135277494a27/volumes/kubernetes.io~configmap/dns-svc Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.804771 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f4b54fe6-91fa-4ba1-9a4e-135277494a27" (UID: "f4b54fe6-91fa-4ba1-9a4e-135277494a27"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.805158 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.805178 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.805187 4739 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.805200 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9dzc\" (UniqueName: \"kubernetes.io/projected/f4b54fe6-91fa-4ba1-9a4e-135277494a27-kube-api-access-w9dzc\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.806290 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f4b54fe6-91fa-4ba1-9a4e-135277494a27" (UID: "f4b54fe6-91fa-4ba1-9a4e-135277494a27"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.852247 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f4b54fe6-91fa-4ba1-9a4e-135277494a27" (UID: "f4b54fe6-91fa-4ba1-9a4e-135277494a27"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.907328 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:55 crc kubenswrapper[4739]: I0218 14:21:55.907378 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4b54fe6-91fa-4ba1-9a4e-135277494a27-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:21:56 crc kubenswrapper[4739]: I0218 14:21:56.030991 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" event={"ID":"f4b54fe6-91fa-4ba1-9a4e-135277494a27","Type":"ContainerDied","Data":"6a36c3e7151b6223682be3dc0062f1484a767c13869813b992c048797216d7e7"} Feb 18 14:21:56 crc kubenswrapper[4739]: I0218 14:21:56.031053 4739 scope.go:117] "RemoveContainer" containerID="0fa401e0fef3f9cb42562b511b0eebc5a44973f242c043cd8c922196427d9cb3" Feb 18 14:21:56 crc kubenswrapper[4739]: I0218 14:21:56.031216 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" Feb 18 14:21:56 crc kubenswrapper[4739]: I0218 14:21:56.081994 4739 scope.go:117] "RemoveContainer" containerID="31b7ef4c1c644cdbe389fbfc6e7e9e8a47e57aa821f30f4da35de5aa73c5099f" Feb 18 14:21:56 crc kubenswrapper[4739]: I0218 14:21:56.082168 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-7mcdv"] Feb 18 14:21:56 crc kubenswrapper[4739]: I0218 14:21:56.097401 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-7mcdv"] Feb 18 14:21:56 crc kubenswrapper[4739]: I0218 14:21:56.354359 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:56 crc kubenswrapper[4739]: I0218 14:21:56.431263 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b54fe6-91fa-4ba1-9a4e-135277494a27" path="/var/lib/kubelet/pods/f4b54fe6-91fa-4ba1-9a4e-135277494a27/volumes" Feb 18 14:21:56 crc kubenswrapper[4739]: I0218 14:21:56.434229 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5fccfc9568-dvccq" Feb 18 14:21:56 crc kubenswrapper[4739]: I0218 14:21:56.523395 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-b4b66db68-ntx7n"] Feb 18 14:21:56 crc kubenswrapper[4739]: I0218 14:21:56.524324 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-b4b66db68-ntx7n" podUID="064975cb-44bb-44b1-8d99-ea09a947b8b8" containerName="barbican-api-log" containerID="cri-o://7c8f4fc08e3d71e41150f03ab573682f2c49c5142be298c07b7fb3ee868889dd" gracePeriod=30 Feb 18 14:21:56 crc kubenswrapper[4739]: I0218 14:21:56.524381 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-b4b66db68-ntx7n" podUID="064975cb-44bb-44b1-8d99-ea09a947b8b8" containerName="barbican-api" containerID="cri-o://2602390e342c4e0155ec05397045ae37047581af9665cd9582b1ac532f791135" gracePeriod=30 Feb 18 14:21:57 crc kubenswrapper[4739]: I0218 14:21:57.051353 4739 generic.go:334] "Generic (PLEG): container finished" podID="064975cb-44bb-44b1-8d99-ea09a947b8b8" containerID="7c8f4fc08e3d71e41150f03ab573682f2c49c5142be298c07b7fb3ee868889dd" exitCode=143 Feb 18 14:21:57 crc kubenswrapper[4739]: I0218 14:21:57.052722 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b4b66db68-ntx7n" event={"ID":"064975cb-44bb-44b1-8d99-ea09a947b8b8","Type":"ContainerDied","Data":"7c8f4fc08e3d71e41150f03ab573682f2c49c5142be298c07b7fb3ee868889dd"} Feb 18 14:21:59 crc kubenswrapper[4739]: I0218 14:21:59.383424 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 14:21:59 crc kubenswrapper[4739]: I0218 14:21:59.448395 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.093310 4739 generic.go:334] "Generic (PLEG): container finished" podID="064975cb-44bb-44b1-8d99-ea09a947b8b8" containerID="2602390e342c4e0155ec05397045ae37047581af9665cd9582b1ac532f791135" exitCode=0 Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.093600 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" containerName="cinder-scheduler" 
containerID="cri-o://78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530" gracePeriod=30 Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.093994 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b4b66db68-ntx7n" event={"ID":"064975cb-44bb-44b1-8d99-ea09a947b8b8","Type":"ContainerDied","Data":"2602390e342c4e0155ec05397045ae37047581af9665cd9582b1ac532f791135"} Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.094407 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" containerName="probe" containerID="cri-o://6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5" gracePeriod=30 Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.309007 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56df8fb6b7-7mcdv" podUID="f4b54fe6-91fa-4ba1-9a4e-135277494a27" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.186:5353: i/o timeout" Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.547650 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.636911 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-combined-ca-bundle\") pod \"064975cb-44bb-44b1-8d99-ea09a947b8b8\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.637249 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/064975cb-44bb-44b1-8d99-ea09a947b8b8-logs\") pod \"064975cb-44bb-44b1-8d99-ea09a947b8b8\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.637824 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-config-data\") pod \"064975cb-44bb-44b1-8d99-ea09a947b8b8\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.638185 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-config-data-custom\") pod \"064975cb-44bb-44b1-8d99-ea09a947b8b8\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.638327 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwwjd\" (UniqueName: \"kubernetes.io/projected/064975cb-44bb-44b1-8d99-ea09a947b8b8-kube-api-access-dwwjd\") pod \"064975cb-44bb-44b1-8d99-ea09a947b8b8\" (UID: \"064975cb-44bb-44b1-8d99-ea09a947b8b8\") " Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.637775 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/064975cb-44bb-44b1-8d99-ea09a947b8b8-logs" (OuterVolumeSpecName: "logs") pod "064975cb-44bb-44b1-8d99-ea09a947b8b8" (UID: "064975cb-44bb-44b1-8d99-ea09a947b8b8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.661648 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "064975cb-44bb-44b1-8d99-ea09a947b8b8" (UID: "064975cb-44bb-44b1-8d99-ea09a947b8b8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.674612 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/064975cb-44bb-44b1-8d99-ea09a947b8b8-kube-api-access-dwwjd" (OuterVolumeSpecName: "kube-api-access-dwwjd") pod "064975cb-44bb-44b1-8d99-ea09a947b8b8" (UID: "064975cb-44bb-44b1-8d99-ea09a947b8b8"). InnerVolumeSpecName "kube-api-access-dwwjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.677109 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "064975cb-44bb-44b1-8d99-ea09a947b8b8" (UID: "064975cb-44bb-44b1-8d99-ea09a947b8b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.707736 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-config-data" (OuterVolumeSpecName: "config-data") pod "064975cb-44bb-44b1-8d99-ea09a947b8b8" (UID: "064975cb-44bb-44b1-8d99-ea09a947b8b8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.741604 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.741646 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/064975cb-44bb-44b1-8d99-ea09a947b8b8-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.741656 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.741664 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/064975cb-44bb-44b1-8d99-ea09a947b8b8-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:00 crc kubenswrapper[4739]: I0218 14:22:00.741674 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwwjd\" (UniqueName: \"kubernetes.io/projected/064975cb-44bb-44b1-8d99-ea09a947b8b8-kube-api-access-dwwjd\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:01 crc kubenswrapper[4739]: I0218 14:22:01.107717 4739 generic.go:334] "Generic (PLEG): container finished" podID="f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" containerID="6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5" exitCode=0 Feb 18 14:22:01 crc kubenswrapper[4739]: I0218 14:22:01.107752 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d","Type":"ContainerDied","Data":"6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5"} Feb 18 14:22:01 crc kubenswrapper[4739]: I0218 14:22:01.112569 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b4b66db68-ntx7n" event={"ID":"064975cb-44bb-44b1-8d99-ea09a947b8b8","Type":"ContainerDied","Data":"e6e7dfb42369260f31fbf7b2c8b3ddee88d4d1f06f45a187f08b311b7e5a41ef"} Feb 18 14:22:01 crc kubenswrapper[4739]: I0218 14:22:01.112647 4739 scope.go:117] "RemoveContainer" containerID="2602390e342c4e0155ec05397045ae37047581af9665cd9582b1ac532f791135" Feb 18 14:22:01 crc kubenswrapper[4739]: I0218 14:22:01.112870 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-b4b66db68-ntx7n" Feb 18 14:22:01 crc kubenswrapper[4739]: I0218 14:22:01.167295 4739 scope.go:117] "RemoveContainer" containerID="7c8f4fc08e3d71e41150f03ab573682f2c49c5142be298c07b7fb3ee868889dd" Feb 18 14:22:01 crc kubenswrapper[4739]: I0218 14:22:01.168252 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-b4b66db68-ntx7n"] Feb 18 14:22:01 crc kubenswrapper[4739]: I0218 14:22:01.179632 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-b4b66db68-ntx7n"] Feb 18 14:22:02 crc kubenswrapper[4739]: I0218 14:22:02.424669 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="064975cb-44bb-44b1-8d99-ea09a947b8b8" path="/var/lib/kubelet/pods/064975cb-44bb-44b1-8d99-ea09a947b8b8/volumes" Feb 18 14:22:03 crc kubenswrapper[4739]: I0218 14:22:03.027303 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7dff988c46-72t9g" Feb 18 14:22:04 crc kubenswrapper[4739]: I0218 14:22:04.749928 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 18 14:22:04 crc kubenswrapper[4739]: I0218 14:22:04.903907 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:22:04 crc kubenswrapper[4739]: I0218 14:22:04.984845 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-65fbfb5b48-rchlc" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.121043 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.178073 4739 generic.go:334] "Generic (PLEG): container finished" podID="f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" containerID="78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530" exitCode=0 Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.179169 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.179504 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d","Type":"ContainerDied","Data":"78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530"} Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.179569 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d","Type":"ContainerDied","Data":"0aa6f9d0113c0aad83b0711a9f1f95a0f189e2ee86406cef9587f35ef42914d9"} Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.179586 4739 scope.go:117] "RemoveContainer" containerID="6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.212598 4739 scope.go:117] "RemoveContainer" containerID="78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.246494 4739 scope.go:117] "RemoveContainer" containerID="6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5" Feb 18 14:22:05 crc kubenswrapper[4739]: E0218 14:22:05.247032 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5\": container with ID starting with 6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5 not found: ID does not exist" containerID="6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.247060 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5"} err="failed to get container status \"6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5\": rpc error: code = NotFound desc = could not find container \"6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5\": container with ID starting with 6b3857cf1f0f960d0342ec8d85e746074b5f7ab8e1b946990c68fe79feca3bb5 not found: ID does not exist" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.247081 4739 scope.go:117] "RemoveContainer" containerID="78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530" Feb 18 14:22:05 crc kubenswrapper[4739]: E0218 14:22:05.247311 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530\": container with ID starting with 78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530 not found: ID does not exist" containerID="78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.247329 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530"} err="failed to get container status \"78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530\": rpc error: code = NotFound desc = could not find container \"78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530\": container with ID starting with 78dd6aaf42656113fe2e77387f9709600b539f358e9a5fee333cb20e4456c530 not found: ID does not exist" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 
14:22:05.279233 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-config-data\") pod \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.279397 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb4bm\" (UniqueName: \"kubernetes.io/projected/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-kube-api-access-vb4bm\") pod \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.279421 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-scripts\") pod \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.279467 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-combined-ca-bundle\") pod \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.279497 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-config-data-custom\") pod \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.279594 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-etc-machine-id\") pod \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\" (UID: \"f06eac39-c0c1-4a36-9e9b-b95d3ef8944d\") " Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.281494 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" (UID: "f06eac39-c0c1-4a36-9e9b-b95d3ef8944d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.288284 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-kube-api-access-vb4bm" (OuterVolumeSpecName: "kube-api-access-vb4bm") pod "f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" (UID: "f06eac39-c0c1-4a36-9e9b-b95d3ef8944d"). InnerVolumeSpecName "kube-api-access-vb4bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.290586 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-scripts" (OuterVolumeSpecName: "scripts") pod "f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" (UID: "f06eac39-c0c1-4a36-9e9b-b95d3ef8944d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.315603 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" (UID: "f06eac39-c0c1-4a36-9e9b-b95d3ef8944d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.383125 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb4bm\" (UniqueName: \"kubernetes.io/projected/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-kube-api-access-vb4bm\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.383357 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.383365 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.383373 4739 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.386756 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" (UID: "f06eac39-c0c1-4a36-9e9b-b95d3ef8944d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.477228 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b4b66db68-ntx7n" podUID="064975cb-44bb-44b1-8d99-ea09a947b8b8" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.200:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.477603 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b4b66db68-ntx7n" podUID="064975cb-44bb-44b1-8d99-ea09a947b8b8" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.200:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.477797 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-config-data" (OuterVolumeSpecName: "config-data") pod "f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" (UID: "f06eac39-c0c1-4a36-9e9b-b95d3ef8944d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.488083 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.488332 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.546532 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.563110 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.590817 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 14:22:05 crc kubenswrapper[4739]: E0218 14:22:05.591514 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064975cb-44bb-44b1-8d99-ea09a947b8b8" containerName="barbican-api-log" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.591600 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="064975cb-44bb-44b1-8d99-ea09a947b8b8" containerName="barbican-api-log" Feb 18 14:22:05 crc kubenswrapper[4739]: E0218 14:22:05.591663 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b54fe6-91fa-4ba1-9a4e-135277494a27" containerName="init" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.591713 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b54fe6-91fa-4ba1-9a4e-135277494a27" containerName="init" Feb 18 14:22:05 crc kubenswrapper[4739]: E0218 14:22:05.591781 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" containerName="probe" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.591835 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" containerName="probe" Feb 18 14:22:05 crc kubenswrapper[4739]: E0218 14:22:05.591896 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" containerName="cinder-scheduler" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.591960 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" containerName="cinder-scheduler" Feb 18 14:22:05 crc kubenswrapper[4739]: E0218 14:22:05.592023 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064975cb-44bb-44b1-8d99-ea09a947b8b8" containerName="barbican-api" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.592076 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="064975cb-44bb-44b1-8d99-ea09a947b8b8" containerName="barbican-api" Feb 18 14:22:05 crc kubenswrapper[4739]: E0218 14:22:05.592155 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b54fe6-91fa-4ba1-9a4e-135277494a27" containerName="dnsmasq-dns" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.592209 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b54fe6-91fa-4ba1-9a4e-135277494a27" containerName="dnsmasq-dns" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.592480 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="064975cb-44bb-44b1-8d99-ea09a947b8b8" 
containerName="barbican-api" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.592543 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" containerName="cinder-scheduler" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.592613 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" containerName="probe" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.592673 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="064975cb-44bb-44b1-8d99-ea09a947b8b8" containerName="barbican-api-log" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.592731 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b54fe6-91fa-4ba1-9a4e-135277494a27" containerName="dnsmasq-dns" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.593878 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.606853 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.606930 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.693020 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k2bm\" (UniqueName: \"kubernetes.io/projected/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-kube-api-access-9k2bm\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.693289 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.693636 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-scripts\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.694931 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-config-data\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.695249 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.695405 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-etc-machine-id\") pod 
\"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.797215 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.797331 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-scripts\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.797372 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-config-data\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.797408 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.797481 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.797631 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k2bm\" (UniqueName: \"kubernetes.io/projected/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-kube-api-access-9k2bm\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.798789 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.802327 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.802484 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.802994 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-config-data\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.805829 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-scripts\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.843008 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k2bm\" (UniqueName: \"kubernetes.io/projected/ff1a7d36-7f60-40b3-82ee-2fd64f780bc4-kube-api-access-9k2bm\") pod \"cinder-scheduler-0\" (UID: \"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4\") " pod="openstack/cinder-scheduler-0" Feb 18 14:22:05 crc kubenswrapper[4739]: I0218 14:22:05.996388 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 14:22:06 crc kubenswrapper[4739]: I0218 14:22:06.425778 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f06eac39-c0c1-4a36-9e9b-b95d3ef8944d" path="/var/lib/kubelet/pods/f06eac39-c0c1-4a36-9e9b-b95d3ef8944d/volumes" Feb 18 14:22:06 crc kubenswrapper[4739]: I0218 14:22:06.547383 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 14:22:06 crc kubenswrapper[4739]: W0218 14:22:06.548438 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff1a7d36_7f60_40b3_82ee_2fd64f780bc4.slice/crio-2ed3252645f8da01309223746bc76942763bb424ec70ccde2c12d3748c26d748 WatchSource:0}: Error finding container 2ed3252645f8da01309223746bc76942763bb424ec70ccde2c12d3748c26d748: Status 404 returned error can't find the container with id 2ed3252645f8da01309223746bc76942763bb424ec70ccde2c12d3748c26d748 Feb 18 14:22:07 crc kubenswrapper[4739]: I0218 14:22:07.212583 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4","Type":"ContainerStarted","Data":"2ed3252645f8da01309223746bc76942763bb424ec70ccde2c12d3748c26d748"} Feb 18 14:22:07 crc kubenswrapper[4739]: I0218 14:22:07.908764 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 18 14:22:07 crc kubenswrapper[4739]: I0218 14:22:07.910578 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 18 14:22:07 crc kubenswrapper[4739]: I0218 14:22:07.913188 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-2g9nj" Feb 18 14:22:07 crc kubenswrapper[4739]: I0218 14:22:07.914558 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 18 14:22:07 crc kubenswrapper[4739]: I0218 14:22:07.914932 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 18 14:22:07 crc kubenswrapper[4739]: I0218 14:22:07.921652 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.047430 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/466767d7-e9c0-4e67-bd56-9c4d53711acb-openstack-config-secret\") pod \"openstackclient\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.047552 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/466767d7-e9c0-4e67-bd56-9c4d53711acb-openstack-config\") pod \"openstackclient\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.047693 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dfpn\" (UniqueName: \"kubernetes.io/projected/466767d7-e9c0-4e67-bd56-9c4d53711acb-kube-api-access-8dfpn\") pod \"openstackclient\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.047802 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/466767d7-e9c0-4e67-bd56-9c4d53711acb-combined-ca-bundle\") pod \"openstackclient\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.166361 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/466767d7-e9c0-4e67-bd56-9c4d53711acb-openstack-config-secret\") pod \"openstackclient\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.169467 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/466767d7-e9c0-4e67-bd56-9c4d53711acb-openstack-config\") pod \"openstackclient\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.169569 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dfpn\" (UniqueName: \"kubernetes.io/projected/466767d7-e9c0-4e67-bd56-9c4d53711acb-kube-api-access-8dfpn\") pod \"openstackclient\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.171020 4739 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/466767d7-e9c0-4e67-bd56-9c4d53711acb-combined-ca-bundle\") pod \"openstackclient\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.171746 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/466767d7-e9c0-4e67-bd56-9c4d53711acb-openstack-config\") pod \"openstackclient\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.182523 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/466767d7-e9c0-4e67-bd56-9c4d53711acb-combined-ca-bundle\") pod \"openstackclient\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.187430 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/466767d7-e9c0-4e67-bd56-9c4d53711acb-openstack-config-secret\") pod \"openstackclient\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.210360 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dfpn\" (UniqueName: \"kubernetes.io/projected/466767d7-e9c0-4e67-bd56-9c4d53711acb-kube-api-access-8dfpn\") pod \"openstackclient\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.234206 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4","Type":"ContainerStarted","Data":"c2299ede957a85075e4ce2e2081142ad7971d798d620aa7a55a105b0534976ff"} Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.234263 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4","Type":"ContainerStarted","Data":"c05a5e51b015b62511e6919cb70699ee5ff50db494a09d669f769b7ecdd61665"} Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.236467 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.282352 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.315537 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.340647 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.340625692 podStartE2EDuration="3.340625692s" podCreationTimestamp="2026-02-18 14:22:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:22:08.27252026 +0000 UTC m=+1360.768241202" watchObservedRunningTime="2026-02-18 14:22:08.340625692 +0000 UTC m=+1360.836346614" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.377362 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.392955 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.393076 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.483082 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6699e575-f077-433c-a257-f65f329d6e69-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6699e575-f077-433c-a257-f65f329d6e69\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.483415 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6699e575-f077-433c-a257-f65f329d6e69-openstack-config-secret\") pod \"openstackclient\" (UID: \"6699e575-f077-433c-a257-f65f329d6e69\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.483614 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6699e575-f077-433c-a257-f65f329d6e69-openstack-config\") pod \"openstackclient\" (UID: \"6699e575-f077-433c-a257-f65f329d6e69\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.483736 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bjnm\" (UniqueName: \"kubernetes.io/projected/6699e575-f077-433c-a257-f65f329d6e69-kube-api-access-5bjnm\") pod \"openstackclient\" (UID: \"6699e575-f077-433c-a257-f65f329d6e69\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: E0218 14:22:08.542962 4739 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 18 14:22:08 crc kubenswrapper[4739]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_466767d7-e9c0-4e67-bd56-9c4d53711acb_0(66923b8b57bd1b731657055ab4b2f8367f02d978f3d726d9107f6ca70adda185): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"66923b8b57bd1b731657055ab4b2f8367f02d978f3d726d9107f6ca70adda185" Netns:"/var/run/netns/7cf92b63-3c80-4fe0-82eb-fafb366a05e2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=66923b8b57bd1b731657055ab4b2f8367f02d978f3d726d9107f6ca70adda185;K8S_POD_UID=466767d7-e9c0-4e67-bd56-9c4d53711acb" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/466767d7-e9c0-4e67-bd56-9c4d53711acb]: expected pod UID "466767d7-e9c0-4e67-bd56-9c4d53711acb" but got "6699e575-f077-433c-a257-f65f329d6e69" from Kube API Feb 18 14:22:08 crc kubenswrapper[4739]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 18 14:22:08 crc kubenswrapper[4739]: > Feb 18 14:22:08 crc kubenswrapper[4739]: E0218 14:22:08.543028 4739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 18 14:22:08 crc kubenswrapper[4739]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_466767d7-e9c0-4e67-bd56-9c4d53711acb_0(66923b8b57bd1b731657055ab4b2f8367f02d978f3d726d9107f6ca70adda185): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"66923b8b57bd1b731657055ab4b2f8367f02d978f3d726d9107f6ca70adda185" Netns:"/var/run/netns/7cf92b63-3c80-4fe0-82eb-fafb366a05e2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=66923b8b57bd1b731657055ab4b2f8367f02d978f3d726d9107f6ca70adda185;K8S_POD_UID=466767d7-e9c0-4e67-bd56-9c4d53711acb" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/466767d7-e9c0-4e67-bd56-9c4d53711acb]: expected pod UID "466767d7-e9c0-4e67-bd56-9c4d53711acb" but got "6699e575-f077-433c-a257-f65f329d6e69" from Kube API Feb 18 14:22:08 crc kubenswrapper[4739]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 18 14:22:08 crc kubenswrapper[4739]: > pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.587081 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bjnm\" (UniqueName: \"kubernetes.io/projected/6699e575-f077-433c-a257-f65f329d6e69-kube-api-access-5bjnm\") pod \"openstackclient\" (UID: \"6699e575-f077-433c-a257-f65f329d6e69\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.587330 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6699e575-f077-433c-a257-f65f329d6e69-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6699e575-f077-433c-a257-f65f329d6e69\") " 
pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.587386 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6699e575-f077-433c-a257-f65f329d6e69-openstack-config-secret\") pod \"openstackclient\" (UID: \"6699e575-f077-433c-a257-f65f329d6e69\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.587550 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6699e575-f077-433c-a257-f65f329d6e69-openstack-config\") pod \"openstackclient\" (UID: \"6699e575-f077-433c-a257-f65f329d6e69\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.589591 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6699e575-f077-433c-a257-f65f329d6e69-openstack-config\") pod \"openstackclient\" (UID: \"6699e575-f077-433c-a257-f65f329d6e69\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.592000 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6699e575-f077-433c-a257-f65f329d6e69-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6699e575-f077-433c-a257-f65f329d6e69\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.592030 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6699e575-f077-433c-a257-f65f329d6e69-openstack-config-secret\") pod \"openstackclient\" (UID: \"6699e575-f077-433c-a257-f65f329d6e69\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.606918 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bjnm\" (UniqueName: \"kubernetes.io/projected/6699e575-f077-433c-a257-f65f329d6e69-kube-api-access-5bjnm\") pod \"openstackclient\" (UID: \"6699e575-f077-433c-a257-f65f329d6e69\") " pod="openstack/openstackclient" Feb 18 14:22:08 crc kubenswrapper[4739]: I0218 14:22:08.792477 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.264772 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.312794 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.327999 4739 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="466767d7-e9c0-4e67-bd56-9c4d53711acb" podUID="6699e575-f077-433c-a257-f65f329d6e69" Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.410230 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dfpn\" (UniqueName: \"kubernetes.io/projected/466767d7-e9c0-4e67-bd56-9c4d53711acb-kube-api-access-8dfpn\") pod \"466767d7-e9c0-4e67-bd56-9c4d53711acb\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.410586 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/466767d7-e9c0-4e67-bd56-9c4d53711acb-combined-ca-bundle\") pod \"466767d7-e9c0-4e67-bd56-9c4d53711acb\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.410741 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/466767d7-e9c0-4e67-bd56-9c4d53711acb-openstack-config\") pod \"466767d7-e9c0-4e67-bd56-9c4d53711acb\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.410933 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/466767d7-e9c0-4e67-bd56-9c4d53711acb-openstack-config-secret\") pod \"466767d7-e9c0-4e67-bd56-9c4d53711acb\" (UID: \"466767d7-e9c0-4e67-bd56-9c4d53711acb\") " Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.412534 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/466767d7-e9c0-4e67-bd56-9c4d53711acb-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "466767d7-e9c0-4e67-bd56-9c4d53711acb" (UID: "466767d7-e9c0-4e67-bd56-9c4d53711acb"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.426011 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/466767d7-e9c0-4e67-bd56-9c4d53711acb-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "466767d7-e9c0-4e67-bd56-9c4d53711acb" (UID: "466767d7-e9c0-4e67-bd56-9c4d53711acb"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.443673 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/466767d7-e9c0-4e67-bd56-9c4d53711acb-kube-api-access-8dfpn" (OuterVolumeSpecName: "kube-api-access-8dfpn") pod "466767d7-e9c0-4e67-bd56-9c4d53711acb" (UID: "466767d7-e9c0-4e67-bd56-9c4d53711acb"). InnerVolumeSpecName "kube-api-access-8dfpn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.445422 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.445704 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/466767d7-e9c0-4e67-bd56-9c4d53711acb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "466767d7-e9c0-4e67-bd56-9c4d53711acb" (UID: "466767d7-e9c0-4e67-bd56-9c4d53711acb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.516233 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/466767d7-e9c0-4e67-bd56-9c4d53711acb-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.516265 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dfpn\" (UniqueName: \"kubernetes.io/projected/466767d7-e9c0-4e67-bd56-9c4d53711acb-kube-api-access-8dfpn\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.516274 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/466767d7-e9c0-4e67-bd56-9c4d53711acb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:09 crc kubenswrapper[4739]: I0218 14:22:09.516282 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/466767d7-e9c0-4e67-bd56-9c4d53711acb-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.276670 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6699e575-f077-433c-a257-f65f329d6e69","Type":"ContainerStarted","Data":"5628848a561d84934ef8f4ff8e31d05cc9adb299e3e936d904e4e46f12cca2c1"} Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.276705 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.300437 4739 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="466767d7-e9c0-4e67-bd56-9c4d53711acb" podUID="6699e575-f077-433c-a257-f65f329d6e69" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.422766 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="466767d7-e9c0-4e67-bd56-9c4d53711acb" path="/var/lib/kubelet/pods/466767d7-e9c0-4e67-bd56-9c4d53711acb/volumes" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.781634 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-8c9d795d5-hcnvm"] Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.783551 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-8c9d795d5-hcnvm" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.787804 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-gcstc" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.788332 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.788373 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.793381 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-config-data\") pod \"heat-engine-8c9d795d5-hcnvm\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " pod="openstack/heat-engine-8c9d795d5-hcnvm" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.793589 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-combined-ca-bundle\") pod \"heat-engine-8c9d795d5-hcnvm\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " pod="openstack/heat-engine-8c9d795d5-hcnvm" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.793644 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lczr4\" (UniqueName: \"kubernetes.io/projected/48f5a3e4-7bee-4689-b7b8-5869536bebb6-kube-api-access-lczr4\") pod \"heat-engine-8c9d795d5-hcnvm\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " pod="openstack/heat-engine-8c9d795d5-hcnvm" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.793798 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-config-data-custom\") pod \"heat-engine-8c9d795d5-hcnvm\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " pod="openstack/heat-engine-8c9d795d5-hcnvm" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.831578 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-8c9d795d5-hcnvm"] Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.896982 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-qh25b"] Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.904363 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.907594 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-qh25b"] Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.908323 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-config-data\") pod \"heat-engine-8c9d795d5-hcnvm\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " pod="openstack/heat-engine-8c9d795d5-hcnvm" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.908415 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-combined-ca-bundle\") pod \"heat-engine-8c9d795d5-hcnvm\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " pod="openstack/heat-engine-8c9d795d5-hcnvm" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.908456 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lczr4\" (UniqueName: \"kubernetes.io/projected/48f5a3e4-7bee-4689-b7b8-5869536bebb6-kube-api-access-lczr4\") pod \"heat-engine-8c9d795d5-hcnvm\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " pod="openstack/heat-engine-8c9d795d5-hcnvm" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.908554 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-config-data-custom\") pod \"heat-engine-8c9d795d5-hcnvm\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " pod="openstack/heat-engine-8c9d795d5-hcnvm" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.924128 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-config-data\") pod \"heat-engine-8c9d795d5-hcnvm\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " pod="openstack/heat-engine-8c9d795d5-hcnvm" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.932514 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-74f6568664-l6ffq"] Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.933087 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-config-data-custom\") pod \"heat-engine-8c9d795d5-hcnvm\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " pod="openstack/heat-engine-8c9d795d5-hcnvm" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.934328 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-74f6568664-l6ffq" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.935969 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.941066 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lczr4\" (UniqueName: \"kubernetes.io/projected/48f5a3e4-7bee-4689-b7b8-5869536bebb6-kube-api-access-lczr4\") pod \"heat-engine-8c9d795d5-hcnvm\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " pod="openstack/heat-engine-8c9d795d5-hcnvm" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.943345 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6b54c68f9b-f929d"] Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.944993 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6b54c68f9b-f929d" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.948705 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.954362 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-74f6568664-l6ffq"] Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.964753 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-combined-ca-bundle\") pod \"heat-engine-8c9d795d5-hcnvm\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " pod="openstack/heat-engine-8c9d795d5-hcnvm" Feb 18 14:22:10 crc kubenswrapper[4739]: I0218 14:22:10.965886 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6b54c68f9b-f929d"] Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:10.998735 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015051 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-combined-ca-bundle\") pod \"heat-api-6b54c68f9b-f929d\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " pod="openstack/heat-api-6b54c68f9b-f929d" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015100 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015123 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-combined-ca-bundle\") pod \"heat-cfnapi-74f6568664-l6ffq\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " pod="openstack/heat-cfnapi-74f6568664-l6ffq" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015191 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65h5g\" (UniqueName: \"kubernetes.io/projected/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-kube-api-access-65h5g\") pod 
\"heat-cfnapi-74f6568664-l6ffq\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " pod="openstack/heat-cfnapi-74f6568664-l6ffq" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015209 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-config\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015253 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-config-data\") pod \"heat-cfnapi-74f6568664-l6ffq\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " pod="openstack/heat-cfnapi-74f6568664-l6ffq" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015271 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015290 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh2gn\" (UniqueName: \"kubernetes.io/projected/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-kube-api-access-qh2gn\") pod \"heat-api-6b54c68f9b-f929d\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " pod="openstack/heat-api-6b54c68f9b-f929d" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015305 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-config-data-custom\") pod \"heat-api-6b54c68f9b-f929d\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " pod="openstack/heat-api-6b54c68f9b-f929d" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015323 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-config-data\") pod \"heat-api-6b54c68f9b-f929d\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " pod="openstack/heat-api-6b54c68f9b-f929d" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015406 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-config-data-custom\") pod \"heat-cfnapi-74f6568664-l6ffq\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " pod="openstack/heat-cfnapi-74f6568664-l6ffq" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015433 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg4n5\" (UniqueName: \"kubernetes.io/projected/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-kube-api-access-zg4n5\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015482 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.015510 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.121478 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh2gn\" (UniqueName: \"kubernetes.io/projected/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-kube-api-access-qh2gn\") pod \"heat-api-6b54c68f9b-f929d\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " pod="openstack/heat-api-6b54c68f9b-f929d" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.123847 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-config-data-custom\") pod \"heat-api-6b54c68f9b-f929d\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " pod="openstack/heat-api-6b54c68f9b-f929d" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.124064 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-config-data\") pod \"heat-api-6b54c68f9b-f929d\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " pod="openstack/heat-api-6b54c68f9b-f929d" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.124977 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-config-data-custom\") pod \"heat-cfnapi-74f6568664-l6ffq\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " pod="openstack/heat-cfnapi-74f6568664-l6ffq" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.125279 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg4n5\" (UniqueName: \"kubernetes.io/projected/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-kube-api-access-zg4n5\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.125656 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.125893 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.126052 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-combined-ca-bundle\") pod \"heat-api-6b54c68f9b-f929d\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " pod="openstack/heat-api-6b54c68f9b-f929d" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.126179 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.126268 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-combined-ca-bundle\") pod \"heat-cfnapi-74f6568664-l6ffq\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " pod="openstack/heat-cfnapi-74f6568664-l6ffq" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.126595 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-8c9d795d5-hcnvm" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.126743 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65h5g\" (UniqueName: \"kubernetes.io/projected/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-kube-api-access-65h5g\") pod \"heat-cfnapi-74f6568664-l6ffq\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " pod="openstack/heat-cfnapi-74f6568664-l6ffq" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.126876 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-config\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.127063 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-config-data\") pod \"heat-cfnapi-74f6568664-l6ffq\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " pod="openstack/heat-cfnapi-74f6568664-l6ffq" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.127166 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.128593 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-config-data-custom\") pod \"heat-api-6b54c68f9b-f929d\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " pod="openstack/heat-api-6b54c68f9b-f929d" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.128883 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 
14:22:11.131710 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.131957 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.132391 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-combined-ca-bundle\") pod \"heat-cfnapi-74f6568664-l6ffq\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " pod="openstack/heat-cfnapi-74f6568664-l6ffq" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.136954 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-config-data-custom\") pod \"heat-cfnapi-74f6568664-l6ffq\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " pod="openstack/heat-cfnapi-74f6568664-l6ffq" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.137248 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.139827 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-config\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.152813 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65h5g\" (UniqueName: \"kubernetes.io/projected/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-kube-api-access-65h5g\") pod \"heat-cfnapi-74f6568664-l6ffq\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " pod="openstack/heat-cfnapi-74f6568664-l6ffq" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.153842 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh2gn\" (UniqueName: \"kubernetes.io/projected/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-kube-api-access-qh2gn\") pod \"heat-api-6b54c68f9b-f929d\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " pod="openstack/heat-api-6b54c68f9b-f929d" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.161995 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg4n5\" (UniqueName: \"kubernetes.io/projected/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-kube-api-access-zg4n5\") pod \"dnsmasq-dns-688b9f5b49-qh25b\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.189622 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-config-data\") pod \"heat-cfnapi-74f6568664-l6ffq\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") " pod="openstack/heat-cfnapi-74f6568664-l6ffq" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.193849 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-combined-ca-bundle\") pod \"heat-api-6b54c68f9b-f929d\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " pod="openstack/heat-api-6b54c68f9b-f929d" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.195214 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-config-data\") pod \"heat-api-6b54c68f9b-f929d\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " pod="openstack/heat-api-6b54c68f9b-f929d" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.340191 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.353399 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-74f6568664-l6ffq" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.364185 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6b54c68f9b-f929d" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.665157 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-668fffc447-mjpk7"] Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.677690 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.681792 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.682021 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.682179 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.686023 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-668fffc447-mjpk7"] Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.750024 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrl68\" (UniqueName: \"kubernetes.io/projected/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-kube-api-access-qrl68\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.750084 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-run-httpd\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.750112 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-internal-tls-certs\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.750275 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-combined-ca-bundle\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.750383 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-public-tls-certs\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.750477 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-config-data\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.750715 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-log-httpd\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " 
pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.750892 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-etc-swift\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.853076 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-8c9d795d5-hcnvm"] Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.856464 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-etc-swift\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.856577 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrl68\" (UniqueName: \"kubernetes.io/projected/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-kube-api-access-qrl68\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.856625 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-run-httpd\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.856649 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-internal-tls-certs\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.856703 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-combined-ca-bundle\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.856740 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-public-tls-certs\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.856775 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-config-data\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.856843 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-log-httpd\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.857476 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-log-httpd\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.864247 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-etc-swift\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.864675 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-run-httpd\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.865625 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-combined-ca-bundle\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.865772 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-internal-tls-certs\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.866892 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-public-tls-certs\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.868696 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-config-data\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.882869 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrl68\" (UniqueName: \"kubernetes.io/projected/ac478be7-1c16-4a7f-a2d2-618cfe76c3d3-kube-api-access-qrl68\") pod \"swift-proxy-668fffc447-mjpk7\" (UID: \"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3\") " pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:11 crc kubenswrapper[4739]: I0218 14:22:11.972660 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 14:22:12 crc kubenswrapper[4739]: I0218 14:22:12.010650 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:12 crc kubenswrapper[4739]: I0218 14:22:12.205670 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-qh25b"] Feb 18 14:22:12 crc kubenswrapper[4739]: I0218 14:22:12.375720 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8c9d795d5-hcnvm" event={"ID":"48f5a3e4-7bee-4689-b7b8-5869536bebb6","Type":"ContainerStarted","Data":"93bc0594ac2cecd77e6e563c92943e92190081b3c713021d84dd28fc365b4b5c"} Feb 18 14:22:12 crc kubenswrapper[4739]: I0218 14:22:12.444087 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-74f6568664-l6ffq"] Feb 18 14:22:12 crc kubenswrapper[4739]: I0218 14:22:12.444132 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" event={"ID":"496019f4-ba1f-40a6-9cff-bf7bd8dfee51","Type":"ContainerStarted","Data":"6ad816951b3fbde1a7196efd13d5a85b80b684bb992e88915048b9d53fd1030f"} Feb 18 14:22:12 crc kubenswrapper[4739]: I0218 14:22:12.542213 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6b54c68f9b-f929d"] Feb 18 14:22:13 crc kubenswrapper[4739]: I0218 14:22:13.175129 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-668fffc447-mjpk7"] Feb 18 14:22:13 crc kubenswrapper[4739]: W0218 14:22:13.183266 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac478be7_1c16_4a7f_a2d2_618cfe76c3d3.slice/crio-f80907fc3d581ffe9545b2921e4a44bc53a12330f66c2c1c723c34fba1d3d34e WatchSource:0}: Error finding container f80907fc3d581ffe9545b2921e4a44bc53a12330f66c2c1c723c34fba1d3d34e: Status 404 returned error can't find the container with id f80907fc3d581ffe9545b2921e4a44bc53a12330f66c2c1c723c34fba1d3d34e Feb 18 14:22:13 crc kubenswrapper[4739]: I0218 14:22:13.433211 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-668fffc447-mjpk7" event={"ID":"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3","Type":"ContainerStarted","Data":"f80907fc3d581ffe9545b2921e4a44bc53a12330f66c2c1c723c34fba1d3d34e"} Feb 18 14:22:13 crc kubenswrapper[4739]: I0218 14:22:13.436899 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6b54c68f9b-f929d" event={"ID":"93ebc0dc-ca08-4c3e-bf54-d6530d56c322","Type":"ContainerStarted","Data":"d5d149d08742d33f66584c180e4bcc703eac2ef7429ac5a118311ff5b7b3d10b"} Feb 18 14:22:13 crc kubenswrapper[4739]: I0218 14:22:13.439741 4739 generic.go:334] "Generic (PLEG): container finished" podID="496019f4-ba1f-40a6-9cff-bf7bd8dfee51" containerID="8b70db3067c947ac9fe93c9c738cc56e4ed6885f9ff81677596f72e6844d09b7" exitCode=0 Feb 18 14:22:13 crc kubenswrapper[4739]: I0218 14:22:13.439792 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" event={"ID":"496019f4-ba1f-40a6-9cff-bf7bd8dfee51","Type":"ContainerDied","Data":"8b70db3067c947ac9fe93c9c738cc56e4ed6885f9ff81677596f72e6844d09b7"} Feb 18 14:22:13 crc kubenswrapper[4739]: I0218 14:22:13.444412 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74f6568664-l6ffq" event={"ID":"3a6654bc-87e3-4bd4-9f38-08f64907ea4c","Type":"ContainerStarted","Data":"d6095d355750dd1b26e4a1ff757c91ef13850ef0b2d9531d6b4d28aeda570b18"} Feb 18 14:22:13 crc kubenswrapper[4739]: I0218 14:22:13.451484 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-engine-8c9d795d5-hcnvm" event={"ID":"48f5a3e4-7bee-4689-b7b8-5869536bebb6","Type":"ContainerStarted","Data":"82135dc6825fa5f144d383addc8105986ce22d1f6d4310421f2ea3bc7b02b990"} Feb 18 14:22:13 crc kubenswrapper[4739]: I0218 14:22:13.451720 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-8c9d795d5-hcnvm" Feb 18 14:22:13 crc kubenswrapper[4739]: I0218 14:22:13.515816 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-8c9d795d5-hcnvm" podStartSLOduration=3.515792127 podStartE2EDuration="3.515792127s" podCreationTimestamp="2026-02-18 14:22:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:22:13.488133302 +0000 UTC m=+1365.983854224" watchObservedRunningTime="2026-02-18 14:22:13.515792127 +0000 UTC m=+1366.011513059" Feb 18 14:22:14 crc kubenswrapper[4739]: I0218 14:22:14.338944 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:22:14 crc kubenswrapper[4739]: I0218 14:22:14.490874 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-668fffc447-mjpk7" event={"ID":"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3","Type":"ContainerStarted","Data":"d33ba6d2fba2c16b217add56ea86461084ffe6ea392032a84c7ade474f0d269f"} Feb 18 14:22:14 crc kubenswrapper[4739]: I0218 14:22:14.490922 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-668fffc447-mjpk7" event={"ID":"ac478be7-1c16-4a7f-a2d2-618cfe76c3d3","Type":"ContainerStarted","Data":"152e44b826638725bb05d54cee25cb071243e222f5966d179642cf8cc599da0e"} Feb 18 14:22:14 crc kubenswrapper[4739]: I0218 14:22:14.492251 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:14 crc kubenswrapper[4739]: I0218 14:22:14.492289 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:14 crc kubenswrapper[4739]: I0218 14:22:14.541083 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-668fffc447-mjpk7" podStartSLOduration=3.541064603 podStartE2EDuration="3.541064603s" podCreationTimestamp="2026-02-18 14:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:22:14.534742934 +0000 UTC m=+1367.030463856" watchObservedRunningTime="2026-02-18 14:22:14.541064603 +0000 UTC m=+1367.036785525" Feb 18 14:22:14 crc kubenswrapper[4739]: I0218 14:22:14.551683 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" event={"ID":"496019f4-ba1f-40a6-9cff-bf7bd8dfee51","Type":"ContainerStarted","Data":"38483feafbc06f3f1617bba16dbce12f0da5c76ff8f6d9cf24f5ec57e0763180"} Feb 18 14:22:14 crc kubenswrapper[4739]: I0218 14:22:14.551740 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:22:14 crc kubenswrapper[4739]: I0218 14:22:14.594603 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" podStartSLOduration=4.594578139 podStartE2EDuration="4.594578139s" podCreationTimestamp="2026-02-18 14:22:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:22:14.572825992 +0000 UTC m=+1367.068546924" watchObservedRunningTime="2026-02-18 14:22:14.594578139 +0000 UTC m=+1367.090299071" Feb 18 14:22:16 crc kubenswrapper[4739]: I0218 14:22:16.201000 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:22:16 crc kubenswrapper[4739]: I0218 14:22:16.205791 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="ceilometer-central-agent" containerID="cri-o://fee11676261091cbd3ef8b82bd38773fb586e3f02824dcfdf641b5fbd18e0091" gracePeriod=30 Feb 18 14:22:16 crc kubenswrapper[4739]: I0218 14:22:16.206311 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="proxy-httpd" containerID="cri-o://8fee94e5c0f5f5f60603f0d079f34bec83f00648183f659c017f17757a2ba096" gracePeriod=30 Feb 18 14:22:16 crc kubenswrapper[4739]: I0218 14:22:16.206384 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="sg-core" containerID="cri-o://8b75480f249109a9022e9ab32c8f19bcca001a279e1f76a25451ad0745c9106a" gracePeriod=30 Feb 18 14:22:16 crc kubenswrapper[4739]: I0218 14:22:16.206433 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="ceilometer-notification-agent" containerID="cri-o://1a8fca3cd8abe9648355c8b1fc41f8b7bfe5f0fd27b741bbf92fafac2053e432" gracePeriod=30 Feb 18 14:22:16 crc kubenswrapper[4739]: I0218 14:22:16.457226 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 14:22:16 crc kubenswrapper[4739]: I0218 14:22:16.587220 4739 generic.go:334] "Generic (PLEG): container finished" podID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerID="8fee94e5c0f5f5f60603f0d079f34bec83f00648183f659c017f17757a2ba096" exitCode=0 Feb 18 14:22:16 crc kubenswrapper[4739]: I0218 14:22:16.587248 4739 generic.go:334] "Generic (PLEG): container finished" podID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerID="8b75480f249109a9022e9ab32c8f19bcca001a279e1f76a25451ad0745c9106a" exitCode=2 Feb 18 14:22:16 crc kubenswrapper[4739]: I0218 14:22:16.587574 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc","Type":"ContainerDied","Data":"8fee94e5c0f5f5f60603f0d079f34bec83f00648183f659c017f17757a2ba096"} Feb 18 14:22:16 crc kubenswrapper[4739]: I0218 14:22:16.587727 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc","Type":"ContainerDied","Data":"8b75480f249109a9022e9ab32c8f19bcca001a279e1f76a25451ad0745c9106a"} Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.659926 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-577d8f6468-htsrs"] Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.662200 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.684133 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-cf66499c9-k855m"] Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.686040 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.699422 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-78dd4688df-l25nk"] Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.701129 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.712272 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-cf66499c9-k855m"] Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.721809 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-577d8f6468-htsrs"] Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.733571 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-78dd4688df-l25nk"] Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.845423 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-config-data\") pod \"heat-engine-cf66499c9-k855m\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.845750 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hlgn\" (UniqueName: \"kubernetes.io/projected/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-kube-api-access-7hlgn\") pod \"heat-cfnapi-78dd4688df-l25nk\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.845768 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-combined-ca-bundle\") pod \"heat-cfnapi-78dd4688df-l25nk\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.845786 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njr9t\" (UniqueName: \"kubernetes.io/projected/9b3545e1-27f7-421f-9471-809d6b04706d-kube-api-access-njr9t\") pod \"heat-engine-cf66499c9-k855m\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.845828 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-config-data\") pod \"heat-api-577d8f6468-htsrs\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.845863 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-config-data\") 
pod \"heat-cfnapi-78dd4688df-l25nk\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.845889 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4p84\" (UniqueName: \"kubernetes.io/projected/54b11ed9-a528-468d-ad77-89ee83d042c5-kube-api-access-n4p84\") pod \"heat-api-577d8f6468-htsrs\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.845905 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-config-data-custom\") pod \"heat-cfnapi-78dd4688df-l25nk\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.845928 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-combined-ca-bundle\") pod \"heat-api-577d8f6468-htsrs\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.846056 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-config-data-custom\") pod \"heat-engine-cf66499c9-k855m\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.846162 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-combined-ca-bundle\") pod \"heat-engine-cf66499c9-k855m\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.846212 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-config-data-custom\") pod \"heat-api-577d8f6468-htsrs\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.948879 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-config-data-custom\") pod \"heat-engine-cf66499c9-k855m\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.949050 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-combined-ca-bundle\") pod \"heat-engine-cf66499c9-k855m\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.949124 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-config-data-custom\") pod \"heat-api-577d8f6468-htsrs\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.949154 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-config-data\") pod \"heat-engine-cf66499c9-k855m\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.949210 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hlgn\" (UniqueName: \"kubernetes.io/projected/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-kube-api-access-7hlgn\") pod \"heat-cfnapi-78dd4688df-l25nk\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.949227 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-combined-ca-bundle\") pod \"heat-cfnapi-78dd4688df-l25nk\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.949286 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njr9t\" (UniqueName: \"kubernetes.io/projected/9b3545e1-27f7-421f-9471-809d6b04706d-kube-api-access-njr9t\") pod \"heat-engine-cf66499c9-k855m\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.949366 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-config-data\") pod \"heat-api-577d8f6468-htsrs\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.949431 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-config-data\") pod \"heat-cfnapi-78dd4688df-l25nk\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.949542 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4p84\" (UniqueName: \"kubernetes.io/projected/54b11ed9-a528-468d-ad77-89ee83d042c5-kube-api-access-n4p84\") pod \"heat-api-577d8f6468-htsrs\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.949692 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-config-data-custom\") pod \"heat-cfnapi-78dd4688df-l25nk\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.949727 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-combined-ca-bundle\") pod \"heat-api-577d8f6468-htsrs\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.964181 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-config-data-custom\") pod \"heat-engine-cf66499c9-k855m\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.976016 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-config-data\") pod \"heat-api-577d8f6468-htsrs\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.977316 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-config-data\") pod \"heat-engine-cf66499c9-k855m\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.979856 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-combined-ca-bundle\") pod \"heat-api-577d8f6468-htsrs\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.980918 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-combined-ca-bundle\") pod \"heat-engine-cf66499c9-k855m\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.982272 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4p84\" (UniqueName: \"kubernetes.io/projected/54b11ed9-a528-468d-ad77-89ee83d042c5-kube-api-access-n4p84\") pod \"heat-api-577d8f6468-htsrs\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:17 crc kubenswrapper[4739]: I0218 14:22:17.986138 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njr9t\" (UniqueName: \"kubernetes.io/projected/9b3545e1-27f7-421f-9471-809d6b04706d-kube-api-access-njr9t\") pod \"heat-engine-cf66499c9-k855m\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.000207 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-config-data-custom\") pod \"heat-api-577d8f6468-htsrs\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.013069 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.021715 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-668fffc447-mjpk7" podUID="ac478be7-1c16-4a7f-a2d2-618cfe76c3d3" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.189192 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-combined-ca-bundle\") pod \"heat-cfnapi-78dd4688df-l25nk\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.189635 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-config-data\") pod \"heat-cfnapi-78dd4688df-l25nk\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.189999 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-config-data-custom\") pod \"heat-cfnapi-78dd4688df-l25nk\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.197687 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hlgn\" (UniqueName: \"kubernetes.io/projected/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-kube-api-access-7hlgn\") pod \"heat-cfnapi-78dd4688df-l25nk\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.284867 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.348424 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.610901 4739 generic.go:334] "Generic (PLEG): container finished" podID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerID="1a8fca3cd8abe9648355c8b1fc41f8b7bfe5f0fd27b741bbf92fafac2053e432" exitCode=0 Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.611179 4739 generic.go:334] "Generic (PLEG): container finished" podID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerID="fee11676261091cbd3ef8b82bd38773fb586e3f02824dcfdf641b5fbd18e0091" exitCode=0 Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.610953 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc","Type":"ContainerDied","Data":"1a8fca3cd8abe9648355c8b1fc41f8b7bfe5f0fd27b741bbf92fafac2053e432"} Feb 18 14:22:18 crc kubenswrapper[4739]: I0218 14:22:18.611223 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc","Type":"ContainerDied","Data":"fee11676261091cbd3ef8b82bd38773fb586e3f02824dcfdf641b5fbd18e0091"} Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.143361 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6b54c68f9b-f929d"] Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.164465 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-74f6568664-l6ffq"] Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.189138 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-59f4cc7b48-2kzkr"] Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.191191 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.196908 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.197144 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.207736 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-config-data\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.207805 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-config-data-custom\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.207823 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4br8z\" (UniqueName: \"kubernetes.io/projected/40d4949b-6d9f-425e-b02f-d8caa727ed99-kube-api-access-4br8z\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.207840 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-internal-tls-certs\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.207862 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-public-tls-certs\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.208019 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-combined-ca-bundle\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.221282 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-84d894dcf4-4xbcm"] Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.223290 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.230579 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.245602 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.245863 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-59f4cc7b48-2kzkr"] Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.269499 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-84d894dcf4-4xbcm"] Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.314965 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.315107 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-combined-ca-bundle\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.315250 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-combined-ca-bundle\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.315280 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data-custom\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.315302 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-config-data\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.318209 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-internal-tls-certs\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.318290 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hc2h\" (UniqueName: \"kubernetes.io/projected/418a2d42-e21e-4d0d-b295-3178e079431c-kube-api-access-7hc2h\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.318382 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-config-data-custom\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.318411 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4br8z\" (UniqueName: \"kubernetes.io/projected/40d4949b-6d9f-425e-b02f-d8caa727ed99-kube-api-access-4br8z\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.318436 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-internal-tls-certs\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.318476 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-public-tls-certs\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.318529 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-public-tls-certs\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.367891 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-combined-ca-bundle\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.368089 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-config-data-custom\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.368362 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-internal-tls-certs\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.384383 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-config-data\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.388000 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4br8z\" (UniqueName: \"kubernetes.io/projected/40d4949b-6d9f-425e-b02f-d8caa727ed99-kube-api-access-4br8z\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.389100 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-public-tls-certs\") pod \"heat-api-59f4cc7b48-2kzkr\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.420264 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-internal-tls-certs\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.420318 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hc2h\" (UniqueName: \"kubernetes.io/projected/418a2d42-e21e-4d0d-b295-3178e079431c-kube-api-access-7hc2h\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.420371 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-public-tls-certs\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.420427 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data\") pod 
\"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.420524 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-combined-ca-bundle\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.420634 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data-custom\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.427166 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-combined-ca-bundle\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.427543 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-public-tls-certs\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.428995 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-internal-tls-certs\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.430315 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.431199 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data-custom\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.458381 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hc2h\" (UniqueName: \"kubernetes.io/projected/418a2d42-e21e-4d0d-b295-3178e079431c-kube-api-access-7hc2h\") pod \"heat-cfnapi-84d894dcf4-4xbcm\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.557874 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:20 crc kubenswrapper[4739]: I0218 14:22:20.573249 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:21 crc kubenswrapper[4739]: I0218 14:22:21.342421 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:22:21 crc kubenswrapper[4739]: I0218 14:22:21.350567 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-77cbbcb957-6xzzv" Feb 18 14:22:21 crc kubenswrapper[4739]: I0218 14:22:21.528522 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-grdr9"] Feb 18 14:22:21 crc kubenswrapper[4739]: I0218 14:22:21.528785 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" podUID="9337767c-12ba-460b-854a-5c2e69db4a5c" containerName="dnsmasq-dns" containerID="cri-o://52b68e08b4643ed4bb44ac6b88f494d230cc74dfa319d3b1f92462acb959fc47" gracePeriod=10 Feb 18 14:22:21 crc kubenswrapper[4739]: I0218 14:22:21.539823 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6cb887488-w2vb4"] Feb 18 14:22:21 crc kubenswrapper[4739]: I0218 14:22:21.540167 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6cb887488-w2vb4" podUID="7e8a55f3-28f4-46da-bc87-6d16902b2dba" containerName="neutron-api" containerID="cri-o://dac67b364bafdc30f9188f9edb3326eeba8fe15953fcbfe0ae9864e55228745d" gracePeriod=30 Feb 18 14:22:21 crc kubenswrapper[4739]: I0218 14:22:21.540384 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6cb887488-w2vb4" podUID="7e8a55f3-28f4-46da-bc87-6d16902b2dba" containerName="neutron-httpd" containerID="cri-o://8dd2b9302e6dd8b8a788c6130228739df1a58a6ee1a8d8355dc5ab489138ee01" gracePeriod=30 Feb 18 14:22:22 crc kubenswrapper[4739]: I0218 14:22:22.022962 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:22 crc kubenswrapper[4739]: I0218 14:22:22.032306 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-668fffc447-mjpk7" Feb 18 14:22:22 crc kubenswrapper[4739]: I0218 14:22:22.667338 4739 generic.go:334] "Generic (PLEG): container finished" podID="7e8a55f3-28f4-46da-bc87-6d16902b2dba" containerID="8dd2b9302e6dd8b8a788c6130228739df1a58a6ee1a8d8355dc5ab489138ee01" exitCode=0 Feb 18 14:22:22 crc kubenswrapper[4739]: I0218 14:22:22.667398 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cb887488-w2vb4" event={"ID":"7e8a55f3-28f4-46da-bc87-6d16902b2dba","Type":"ContainerDied","Data":"8dd2b9302e6dd8b8a788c6130228739df1a58a6ee1a8d8355dc5ab489138ee01"} Feb 18 14:22:22 crc kubenswrapper[4739]: I0218 14:22:22.669975 4739 generic.go:334] "Generic (PLEG): container finished" podID="9337767c-12ba-460b-854a-5c2e69db4a5c" containerID="52b68e08b4643ed4bb44ac6b88f494d230cc74dfa319d3b1f92462acb959fc47" exitCode=0 Feb 18 14:22:22 crc kubenswrapper[4739]: I0218 14:22:22.670107 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" event={"ID":"9337767c-12ba-460b-854a-5c2e69db4a5c","Type":"ContainerDied","Data":"52b68e08b4643ed4bb44ac6b88f494d230cc74dfa319d3b1f92462acb959fc47"} Feb 18 14:22:24 crc kubenswrapper[4739]: I0218 14:22:24.483207 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" podUID="9337767c-12ba-460b-854a-5c2e69db4a5c" 
containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.206:5353: connect: connection refused" Feb 18 14:22:28 crc kubenswrapper[4739]: I0218 14:22:28.759036 4739 generic.go:334] "Generic (PLEG): container finished" podID="7e8a55f3-28f4-46da-bc87-6d16902b2dba" containerID="dac67b364bafdc30f9188f9edb3326eeba8fe15953fcbfe0ae9864e55228745d" exitCode=0 Feb 18 14:22:28 crc kubenswrapper[4739]: I0218 14:22:28.759680 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cb887488-w2vb4" event={"ID":"7e8a55f3-28f4-46da-bc87-6d16902b2dba","Type":"ContainerDied","Data":"dac67b364bafdc30f9188f9edb3326eeba8fe15953fcbfe0ae9864e55228745d"} Feb 18 14:22:28 crc kubenswrapper[4739]: E0218 14:22:28.771659 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Feb 18 14:22:28 crc kubenswrapper[4739]: E0218 14:22:28.771814 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b8h678hffh68dhb5h656h679h666h699h5dch5fbh5cdh654h58fh9h5b4h9ch5fdh5b7h5b4h584h698h665h9dh8dh5bfh6dh59dh65fh594h56fhf8q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5bjnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(6699e575-f077-433c-a257-f65f329d6e69): ErrImagePull: rpc error: code = 
Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:22:28 crc kubenswrapper[4739]: E0218 14:22:28.773171 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="6699e575-f077-433c-a257-f65f329d6e69" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.282427 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.461310 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-scripts\") pod \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.461816 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-config-data\") pod \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.461841 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xblb8\" (UniqueName: \"kubernetes.io/projected/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-kube-api-access-xblb8\") pod \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.464825 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-sg-core-conf-yaml\") pod \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.464898 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-combined-ca-bundle\") pod \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.464980 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-run-httpd\") pod \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.465002 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-log-httpd\") pod \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\" (UID: \"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc\") " Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.466672 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" (UID: "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.469105 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" (UID: "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.473757 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-kube-api-access-xblb8" (OuterVolumeSpecName: "kube-api-access-xblb8") pod "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" (UID: "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc"). InnerVolumeSpecName "kube-api-access-xblb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.482077 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-scripts" (OuterVolumeSpecName: "scripts") pod "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" (UID: "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.483788 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" podUID="9337767c-12ba-460b-854a-5c2e69db4a5c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.206:5353: connect: connection refused" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.546848 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" (UID: "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.572123 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.572702 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.572720 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.572730 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.572744 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xblb8\" (UniqueName: \"kubernetes.io/projected/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-kube-api-access-xblb8\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.646607 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" (UID: "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.674343 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.775988 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-config-data" (OuterVolumeSpecName: "config-data") pod "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" (UID: "b2736bc1-34ac-4fe9-aa6a-c0af249e1acc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.777949 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.824272 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.825230 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2736bc1-34ac-4fe9-aa6a-c0af249e1acc","Type":"ContainerDied","Data":"8c8032c3a1234bf623502d6fafa31158115ef887ed497b5adb6540ed67e79d70"} Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.825284 4739 scope.go:117] "RemoveContainer" containerID="8fee94e5c0f5f5f60603f0d079f34bec83f00648183f659c017f17757a2ba096" Feb 18 14:22:29 crc kubenswrapper[4739]: E0218 14:22:29.830377 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="6699e575-f077-433c-a257-f65f329d6e69" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.895704 4739 scope.go:117] "RemoveContainer" containerID="8b75480f249109a9022e9ab32c8f19bcca001a279e1f76a25451ad0745c9106a" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.923608 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.947953 4739 scope.go:117] "RemoveContainer" containerID="1a8fca3cd8abe9648355c8b1fc41f8b7bfe5f0fd27b741bbf92fafac2053e432" Feb 18 14:22:29 crc kubenswrapper[4739]: I0218 14:22:29.978488 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.006529 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:22:30 crc kubenswrapper[4739]: E0218 14:22:30.007218 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="sg-core" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.007245 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="sg-core" Feb 18 14:22:30 crc kubenswrapper[4739]: E0218 14:22:30.007262 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="ceilometer-central-agent" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.007271 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="ceilometer-central-agent" Feb 18 14:22:30 crc kubenswrapper[4739]: E0218 14:22:30.007293 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="ceilometer-notification-agent" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.007302 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="ceilometer-notification-agent" Feb 18 14:22:30 crc kubenswrapper[4739]: E0218 14:22:30.007322 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="proxy-httpd" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.007330 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="proxy-httpd" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.007633 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="ceilometer-notification-agent" Feb 18 
14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.007670 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="sg-core" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.007704 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="ceilometer-central-agent" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.007718 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" containerName="proxy-httpd" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.010330 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.014473 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.014605 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.015725 4739 scope.go:117] "RemoveContainer" containerID="fee11676261091cbd3ef8b82bd38773fb586e3f02824dcfdf641b5fbd18e0091" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.030530 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.095235 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bldt\" (UniqueName: \"kubernetes.io/projected/f9138cdd-fae9-4563-8fea-43df3f704da4-kube-api-access-6bldt\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.095347 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-scripts\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.095469 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9138cdd-fae9-4563-8fea-43df3f704da4-run-httpd\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.095538 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.095589 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.095624 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-config-data\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.095644 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9138cdd-fae9-4563-8fea-43df3f704da4-log-httpd\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.201010 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bldt\" (UniqueName: \"kubernetes.io/projected/f9138cdd-fae9-4563-8fea-43df3f704da4-kube-api-access-6bldt\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.201516 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-scripts\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.201626 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9138cdd-fae9-4563-8fea-43df3f704da4-run-httpd\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.201715 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.201771 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.201811 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-config-data\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.201832 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9138cdd-fae9-4563-8fea-43df3f704da4-log-httpd\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.204754 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9138cdd-fae9-4563-8fea-43df3f704da4-log-httpd\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.205120 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f9138cdd-fae9-4563-8fea-43df3f704da4-run-httpd\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.208281 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.208994 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.211431 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-scripts\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.211523 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-config-data\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.233129 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bldt\" (UniqueName: \"kubernetes.io/projected/f9138cdd-fae9-4563-8fea-43df3f704da4-kube-api-access-6bldt\") pod \"ceilometer-0\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.338019 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-cf66499c9-k855m"] Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.358492 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.460559 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2736bc1-34ac-4fe9-aa6a-c0af249e1acc" path="/var/lib/kubelet/pods/b2736bc1-34ac-4fe9-aa6a-c0af249e1acc/volumes" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.497709 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.546076 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.623075 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-ovndb-tls-certs\") pod \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.623158 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-dns-swift-storage-0\") pod \"9337767c-12ba-460b-854a-5c2e69db4a5c\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.623219 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghkr2\" (UniqueName: \"kubernetes.io/projected/7e8a55f3-28f4-46da-bc87-6d16902b2dba-kube-api-access-ghkr2\") pod \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.623240 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-httpd-config\") pod \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.623262 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltdp2\" (UniqueName: \"kubernetes.io/projected/9337767c-12ba-460b-854a-5c2e69db4a5c-kube-api-access-ltdp2\") pod \"9337767c-12ba-460b-854a-5c2e69db4a5c\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.623310 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-ovsdbserver-nb\") pod \"9337767c-12ba-460b-854a-5c2e69db4a5c\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.623334 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-config\") pod \"9337767c-12ba-460b-854a-5c2e69db4a5c\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.623387 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-ovsdbserver-sb\") pod \"9337767c-12ba-460b-854a-5c2e69db4a5c\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.623492 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-combined-ca-bundle\") pod \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.623587 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-dns-svc\") pod 
\"9337767c-12ba-460b-854a-5c2e69db4a5c\" (UID: \"9337767c-12ba-460b-854a-5c2e69db4a5c\") " Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.623646 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-config\") pod \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\" (UID: \"7e8a55f3-28f4-46da-bc87-6d16902b2dba\") " Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.686010 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "7e8a55f3-28f4-46da-bc87-6d16902b2dba" (UID: "7e8a55f3-28f4-46da-bc87-6d16902b2dba"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.704042 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e8a55f3-28f4-46da-bc87-6d16902b2dba-kube-api-access-ghkr2" (OuterVolumeSpecName: "kube-api-access-ghkr2") pod "7e8a55f3-28f4-46da-bc87-6d16902b2dba" (UID: "7e8a55f3-28f4-46da-bc87-6d16902b2dba"). InnerVolumeSpecName "kube-api-access-ghkr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.729850 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9337767c-12ba-460b-854a-5c2e69db4a5c-kube-api-access-ltdp2" (OuterVolumeSpecName: "kube-api-access-ltdp2") pod "9337767c-12ba-460b-854a-5c2e69db4a5c" (UID: "9337767c-12ba-460b-854a-5c2e69db4a5c"). InnerVolumeSpecName "kube-api-access-ltdp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.744504 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghkr2\" (UniqueName: \"kubernetes.io/projected/7e8a55f3-28f4-46da-bc87-6d16902b2dba-kube-api-access-ghkr2\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.744556 4739 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.744572 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltdp2\" (UniqueName: \"kubernetes.io/projected/9337767c-12ba-460b-854a-5c2e69db4a5c-kube-api-access-ltdp2\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.937562 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-cf66499c9-k855m" event={"ID":"9b3545e1-27f7-421f-9471-809d6b04706d","Type":"ContainerStarted","Data":"34402e3be46581b4f11650c5f4f2ec4f1afe7d82b3230635fe9430959d1f9c69"} Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.976128 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cb887488-w2vb4" event={"ID":"7e8a55f3-28f4-46da-bc87-6d16902b2dba","Type":"ContainerDied","Data":"92e077d54516a226953141815b27472b6e615b27ebdcfef077823d82e467f49d"} Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.976193 4739 scope.go:117] "RemoveContainer" containerID="8dd2b9302e6dd8b8a788c6130228739df1a58a6ee1a8d8355dc5ab489138ee01" Feb 18 14:22:30 crc kubenswrapper[4739]: I0218 14:22:30.976378 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6cb887488-w2vb4" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.007786 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-config" (OuterVolumeSpecName: "config") pod "7e8a55f3-28f4-46da-bc87-6d16902b2dba" (UID: "7e8a55f3-28f4-46da-bc87-6d16902b2dba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.008081 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-74f6568664-l6ffq" podUID="3a6654bc-87e3-4bd4-9f38-08f64907ea4c" containerName="heat-cfnapi" containerID="cri-o://cff21032675d69321ff58f5bdd004b9b71a78b6909e645fa2ed5105f4cac95f4" gracePeriod=60 Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.008177 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74f6568664-l6ffq" event={"ID":"3a6654bc-87e3-4bd4-9f38-08f64907ea4c","Type":"ContainerStarted","Data":"cff21032675d69321ff58f5bdd004b9b71a78b6909e645fa2ed5105f4cac95f4"} Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.008227 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-74f6568664-l6ffq" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.055932 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.065196 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9337767c-12ba-460b-854a-5c2e69db4a5c" (UID: "9337767c-12ba-460b-854a-5c2e69db4a5c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.067362 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.067760 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-grdr9" event={"ID":"9337767c-12ba-460b-854a-5c2e69db4a5c","Type":"ContainerDied","Data":"fa732d1eda4ac1c7763b996c5ef44f9b843ec150eee66ab022f29219cacb77ef"} Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.078003 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-74f6568664-l6ffq" podStartSLOduration=4.160515717 podStartE2EDuration="21.077976615s" podCreationTimestamp="2026-02-18 14:22:10 +0000 UTC" firstStartedPulling="2026-02-18 14:22:12.462520908 +0000 UTC m=+1364.958241830" lastFinishedPulling="2026-02-18 14:22:29.379981806 +0000 UTC m=+1381.875702728" observedRunningTime="2026-02-18 14:22:31.054489204 +0000 UTC m=+1383.550210126" watchObservedRunningTime="2026-02-18 14:22:31.077976615 +0000 UTC m=+1383.573697537" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.084977 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9337767c-12ba-460b-854a-5c2e69db4a5c" (UID: "9337767c-12ba-460b-854a-5c2e69db4a5c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.116462 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6b54c68f9b-f929d" event={"ID":"93ebc0dc-ca08-4c3e-bf54-d6530d56c322","Type":"ContainerStarted","Data":"311d21994840f4dffed976021db3e086569fe52a37728ae38e5c972914ef7d61"} Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.124637 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6b54c68f9b-f929d" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.116599 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6b54c68f9b-f929d" podUID="93ebc0dc-ca08-4c3e-bf54-d6530d56c322" containerName="heat-api" containerID="cri-o://311d21994840f4dffed976021db3e086569fe52a37728ae38e5c972914ef7d61" gracePeriod=60 Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.172682 4739 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.172736 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.247727 4739 scope.go:117] "RemoveContainer" containerID="dac67b364bafdc30f9188f9edb3326eeba8fe15953fcbfe0ae9864e55228745d" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.267198 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "7e8a55f3-28f4-46da-bc87-6d16902b2dba" (UID: "7e8a55f3-28f4-46da-bc87-6d16902b2dba"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.268512 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-8c9d795d5-hcnvm" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.276584 4739 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.302897 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-84d894dcf4-4xbcm"] Feb 18 14:22:31 crc kubenswrapper[4739]: W0218 14:22:31.313004 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54b11ed9_a528_468d_ad77_89ee83d042c5.slice/crio-3101800e54f402ed2a23af1f3aee27b29f2d43b6c77b8db6cee81f7af07674c1 WatchSource:0}: Error finding container 3101800e54f402ed2a23af1f3aee27b29f2d43b6c77b8db6cee81f7af07674c1: Status 404 returned error can't find the container with id 3101800e54f402ed2a23af1f3aee27b29f2d43b6c77b8db6cee81f7af07674c1 Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.336885 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e8a55f3-28f4-46da-bc87-6d16902b2dba" (UID: "7e8a55f3-28f4-46da-bc87-6d16902b2dba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.370007 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9337767c-12ba-460b-854a-5c2e69db4a5c" (UID: "9337767c-12ba-460b-854a-5c2e69db4a5c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:22:31 crc kubenswrapper[4739]: W0218 14:22:31.371329 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod418a2d42_e21e_4d0d_b295_3178e079431c.slice/crio-a742c3494bc51e899a5c01b6b095653da1f5cc7a599a99cd559cc59388b29eb4 WatchSource:0}: Error finding container a742c3494bc51e899a5c01b6b095653da1f5cc7a599a99cd559cc59388b29eb4: Status 404 returned error can't find the container with id a742c3494bc51e899a5c01b6b095653da1f5cc7a599a99cd559cc59388b29eb4 Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.381727 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.384302 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e8a55f3-28f4-46da-bc87-6d16902b2dba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.393192 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-config" (OuterVolumeSpecName: "config") pod "9337767c-12ba-460b-854a-5c2e69db4a5c" (UID: "9337767c-12ba-460b-854a-5c2e69db4a5c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.394946 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-577d8f6468-htsrs"] Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.419023 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-59f4cc7b48-2kzkr"] Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.428263 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-78dd4688df-l25nk"] Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.437540 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6b54c68f9b-f929d" podStartSLOduration=4.518114306 podStartE2EDuration="21.437521334s" podCreationTimestamp="2026-02-18 14:22:10 +0000 UTC" firstStartedPulling="2026-02-18 14:22:12.460557708 +0000 UTC m=+1364.956278640" lastFinishedPulling="2026-02-18 14:22:29.379964756 +0000 UTC m=+1381.875685668" observedRunningTime="2026-02-18 14:22:31.145585324 +0000 UTC m=+1383.641306266" watchObservedRunningTime="2026-02-18 14:22:31.437521334 +0000 UTC m=+1383.933242256" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.480325 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.486537 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.575666 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9337767c-12ba-460b-854a-5c2e69db4a5c" (UID: "9337767c-12ba-460b-854a-5c2e69db4a5c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.588368 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9337767c-12ba-460b-854a-5c2e69db4a5c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.632871 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6cb887488-w2vb4"] Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.640577 4739 scope.go:117] "RemoveContainer" containerID="52b68e08b4643ed4bb44ac6b88f494d230cc74dfa319d3b1f92462acb959fc47" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.647421 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6cb887488-w2vb4"] Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.763959 4739 scope.go:117] "RemoveContainer" containerID="674be441708c52d00270c7a887278841578e6b9bf30714644be7ecc79213fa7b" Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.810921 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-grdr9"] Feb 18 14:22:31 crc kubenswrapper[4739]: I0218 14:22:31.822951 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-grdr9"] Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.182645 4739 generic.go:334] "Generic (PLEG): container finished" podID="93ebc0dc-ca08-4c3e-bf54-d6530d56c322" containerID="311d21994840f4dffed976021db3e086569fe52a37728ae38e5c972914ef7d61" exitCode=0 Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.182812 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6b54c68f9b-f929d" event={"ID":"93ebc0dc-ca08-4c3e-bf54-d6530d56c322","Type":"ContainerDied","Data":"311d21994840f4dffed976021db3e086569fe52a37728ae38e5c972914ef7d61"} Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.187480 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-78dd4688df-l25nk" event={"ID":"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3","Type":"ContainerStarted","Data":"380597d90d5bc9556e8ce886d3f60776a514e8cf358489da36b8633a600f819d"} Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.187521 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-78dd4688df-l25nk" event={"ID":"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3","Type":"ContainerStarted","Data":"8eee24ec7f9accbd61a3e88a575fabd5b156dc4338ac144c30c542bf27a434fc"} Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.188939 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.192333 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" event={"ID":"418a2d42-e21e-4d0d-b295-3178e079431c","Type":"ContainerStarted","Data":"35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467"} Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.192394 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" event={"ID":"418a2d42-e21e-4d0d-b295-3178e079431c","Type":"ContainerStarted","Data":"a742c3494bc51e899a5c01b6b095653da1f5cc7a599a99cd559cc59388b29eb4"} Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.192918 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 
14:22:32.200645 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-59f4cc7b48-2kzkr" event={"ID":"40d4949b-6d9f-425e-b02f-d8caa727ed99","Type":"ContainerStarted","Data":"12eea8fb9fe4ae7ff2a3c678dc4bd3905eb6fb61a72f8c583710252b1c05d211"} Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.200690 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-59f4cc7b48-2kzkr" event={"ID":"40d4949b-6d9f-425e-b02f-d8caa727ed99","Type":"ContainerStarted","Data":"182afb94ab91cf9899a4110a4be4e76e5c04c7d5630670036fcfd2f21cbc8a5f"} Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.200807 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.206245 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9138cdd-fae9-4563-8fea-43df3f704da4","Type":"ContainerStarted","Data":"f49f9c840da6b7b1c2c162adfd6ff58755e7165a8c2d9b23a26c34f3222084fc"} Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.215744 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-577d8f6468-htsrs" event={"ID":"54b11ed9-a528-468d-ad77-89ee83d042c5","Type":"ContainerStarted","Data":"b5ec72c7a07e63c0579c322c043938266194df6972d26fd4bc42bff8cd2e1b8f"} Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.215797 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-577d8f6468-htsrs" event={"ID":"54b11ed9-a528-468d-ad77-89ee83d042c5","Type":"ContainerStarted","Data":"3101800e54f402ed2a23af1f3aee27b29f2d43b6c77b8db6cee81f7af07674c1"} Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.216150 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.223951 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-cf66499c9-k855m" event={"ID":"9b3545e1-27f7-421f-9471-809d6b04706d","Type":"ContainerStarted","Data":"783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b"} Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.224088 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.225562 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-78dd4688df-l25nk" podStartSLOduration=15.225547105 podStartE2EDuration="15.225547105s" podCreationTimestamp="2026-02-18 14:22:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:22:32.20902085 +0000 UTC m=+1384.704741762" watchObservedRunningTime="2026-02-18 14:22:32.225547105 +0000 UTC m=+1384.721268037" Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.230597 4739 generic.go:334] "Generic (PLEG): container finished" podID="3a6654bc-87e3-4bd4-9f38-08f64907ea4c" containerID="cff21032675d69321ff58f5bdd004b9b71a78b6909e645fa2ed5105f4cac95f4" exitCode=0 Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.230660 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74f6568664-l6ffq" event={"ID":"3a6654bc-87e3-4bd4-9f38-08f64907ea4c","Type":"ContainerDied","Data":"cff21032675d69321ff58f5bdd004b9b71a78b6909e645fa2ed5105f4cac95f4"} Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.236536 4739 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-59f4cc7b48-2kzkr" podStartSLOduration=12.236515301 podStartE2EDuration="12.236515301s" podCreationTimestamp="2026-02-18 14:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:22:32.230352656 +0000 UTC m=+1384.726073578" watchObservedRunningTime="2026-02-18 14:22:32.236515301 +0000 UTC m=+1384.732236233"
Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.274002 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" podStartSLOduration=12.273975693 podStartE2EDuration="12.273975693s" podCreationTimestamp="2026-02-18 14:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:22:32.26273456 +0000 UTC m=+1384.758455492" watchObservedRunningTime="2026-02-18 14:22:32.273975693 +0000 UTC m=+1384.769696615"
Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.345487 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-cf66499c9-k855m" podStartSLOduration=15.3454632 podStartE2EDuration="15.3454632s" podCreationTimestamp="2026-02-18 14:22:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:22:32.289955865 +0000 UTC m=+1384.785676787" watchObservedRunningTime="2026-02-18 14:22:32.3454632 +0000 UTC m=+1384.841184142"
Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.358731 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-577d8f6468-htsrs" podStartSLOduration=15.358708923 podStartE2EDuration="15.358708923s" podCreationTimestamp="2026-02-18 14:22:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:22:32.342470055 +0000 UTC m=+1384.838190987" watchObservedRunningTime="2026-02-18 14:22:32.358708923 +0000 UTC m=+1384.854429855"
Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.478880 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e8a55f3-28f4-46da-bc87-6d16902b2dba" path="/var/lib/kubelet/pods/7e8a55f3-28f4-46da-bc87-6d16902b2dba/volumes"
Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.479728 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9337767c-12ba-460b-854a-5c2e69db4a5c" path="/var/lib/kubelet/pods/9337767c-12ba-460b-854a-5c2e69db4a5c/volumes"
Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.773357 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-74f6568664-l6ffq"
Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.922072 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6b54c68f9b-f929d"
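The pod_startup_latency_tracker entries above record the kubelet's start-up measurement for each heat pod: because firstStartedPulling and lastFinishedPulling are zero-valued here (no image pull window), the reported podStartSLOduration equals watchObservedRunningTime minus podCreationTimestamp. A minimal Go sketch of that arithmetic, using the heat-api-59f4cc7b48-2kzkr values logged above (an illustration of the numbers, not kubelet source code):

// latency_check.go - illustrative sketch only; recomputes podStartSLOduration
// for heat-api-59f4cc7b48-2kzkr from the timestamps in the log entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matches Go's time.Time.String() format used in the log entries.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2026-02-18 14:22:20 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2026-02-18 14:22:32.236515301 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// With no image pull window, the SLO duration collapses to observed - created.
	fmt.Printf("podStartSLOduration ~= %.9f s\n", observed.Sub(created).Seconds())
	// Prints 12.236515301, matching podStartSLOduration=12.236515301 above.
}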
Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.952788 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-combined-ca-bundle\") pod \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") "
Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.953099 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-config-data\") pod \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") "
Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.953194 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65h5g\" (UniqueName: \"kubernetes.io/projected/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-kube-api-access-65h5g\") pod \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") "
Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.953271 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-config-data-custom\") pod \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\" (UID: \"3a6654bc-87e3-4bd4-9f38-08f64907ea4c\") "
Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.970779 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3a6654bc-87e3-4bd4-9f38-08f64907ea4c" (UID: "3a6654bc-87e3-4bd4-9f38-08f64907ea4c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:22:32 crc kubenswrapper[4739]: I0218 14:22:32.971371 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-kube-api-access-65h5g" (OuterVolumeSpecName: "kube-api-access-65h5g") pod "3a6654bc-87e3-4bd4-9f38-08f64907ea4c" (UID: "3a6654bc-87e3-4bd4-9f38-08f64907ea4c"). InnerVolumeSpecName "kube-api-access-65h5g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.013687 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a6654bc-87e3-4bd4-9f38-08f64907ea4c" (UID: "3a6654bc-87e3-4bd4-9f38-08f64907ea4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.043572 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-config-data" (OuterVolumeSpecName: "config-data") pod "3a6654bc-87e3-4bd4-9f38-08f64907ea4c" (UID: "3a6654bc-87e3-4bd4-9f38-08f64907ea4c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.057385 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-config-data-custom\") pod \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.057754 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qh2gn\" (UniqueName: \"kubernetes.io/projected/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-kube-api-access-qh2gn\") pod \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.057810 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-config-data\") pod \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.057852 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-combined-ca-bundle\") pod \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\" (UID: \"93ebc0dc-ca08-4c3e-bf54-d6530d56c322\") " Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.058551 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.058579 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65h5g\" (UniqueName: \"kubernetes.io/projected/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-kube-api-access-65h5g\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.058593 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.058608 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6654bc-87e3-4bd4-9f38-08f64907ea4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.062232 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-kube-api-access-qh2gn" (OuterVolumeSpecName: "kube-api-access-qh2gn") pod "93ebc0dc-ca08-4c3e-bf54-d6530d56c322" (UID: "93ebc0dc-ca08-4c3e-bf54-d6530d56c322"). InnerVolumeSpecName "kube-api-access-qh2gn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.062554 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "93ebc0dc-ca08-4c3e-bf54-d6530d56c322" (UID: "93ebc0dc-ca08-4c3e-bf54-d6530d56c322"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.097553 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93ebc0dc-ca08-4c3e-bf54-d6530d56c322" (UID: "93ebc0dc-ca08-4c3e-bf54-d6530d56c322"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.139530 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-config-data" (OuterVolumeSpecName: "config-data") pod "93ebc0dc-ca08-4c3e-bf54-d6530d56c322" (UID: "93ebc0dc-ca08-4c3e-bf54-d6530d56c322"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.161248 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.161282 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.161292 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.161301 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qh2gn\" (UniqueName: \"kubernetes.io/projected/93ebc0dc-ca08-4c3e-bf54-d6530d56c322-kube-api-access-qh2gn\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.247256 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6b54c68f9b-f929d" event={"ID":"93ebc0dc-ca08-4c3e-bf54-d6530d56c322","Type":"ContainerDied","Data":"d5d149d08742d33f66584c180e4bcc703eac2ef7429ac5a118311ff5b7b3d10b"} Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.247312 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6b54c68f9b-f929d" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.247327 4739 scope.go:117] "RemoveContainer" containerID="311d21994840f4dffed976021db3e086569fe52a37728ae38e5c972914ef7d61" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.249117 4739 generic.go:334] "Generic (PLEG): container finished" podID="54b11ed9-a528-468d-ad77-89ee83d042c5" containerID="b5ec72c7a07e63c0579c322c043938266194df6972d26fd4bc42bff8cd2e1b8f" exitCode=1 Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.249182 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-577d8f6468-htsrs" event={"ID":"54b11ed9-a528-468d-ad77-89ee83d042c5","Type":"ContainerDied","Data":"b5ec72c7a07e63c0579c322c043938266194df6972d26fd4bc42bff8cd2e1b8f"} Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.249964 4739 scope.go:117] "RemoveContainer" containerID="b5ec72c7a07e63c0579c322c043938266194df6972d26fd4bc42bff8cd2e1b8f" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.255531 4739 generic.go:334] "Generic (PLEG): container finished" podID="d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" containerID="380597d90d5bc9556e8ce886d3f60776a514e8cf358489da36b8633a600f819d" exitCode=1 Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.255609 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-78dd4688df-l25nk" event={"ID":"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3","Type":"ContainerDied","Data":"380597d90d5bc9556e8ce886d3f60776a514e8cf358489da36b8633a600f819d"} Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.256280 4739 scope.go:117] "RemoveContainer" containerID="380597d90d5bc9556e8ce886d3f60776a514e8cf358489da36b8633a600f819d" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.264179 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74f6568664-l6ffq" event={"ID":"3a6654bc-87e3-4bd4-9f38-08f64907ea4c","Type":"ContainerDied","Data":"d6095d355750dd1b26e4a1ff757c91ef13850ef0b2d9531d6b4d28aeda570b18"} Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.264304 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-74f6568664-l6ffq" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.274554 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9138cdd-fae9-4563-8fea-43df3f704da4","Type":"ContainerStarted","Data":"be93e2023094d77daeb6b0949f4fa4b335efb2b640defae52fa9227796359a82"} Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.285699 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.290903 4739 scope.go:117] "RemoveContainer" containerID="cff21032675d69321ff58f5bdd004b9b71a78b6909e645fa2ed5105f4cac95f4" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.338190 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-74f6568664-l6ffq"] Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.348548 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.358197 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-74f6568664-l6ffq"] Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.380941 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6b54c68f9b-f929d"] Feb 18 14:22:33 crc kubenswrapper[4739]: I0218 14:22:33.387602 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6b54c68f9b-f929d"] Feb 18 14:22:34 crc kubenswrapper[4739]: I0218 14:22:34.288943 4739 generic.go:334] "Generic (PLEG): container finished" podID="54b11ed9-a528-468d-ad77-89ee83d042c5" containerID="5cf720d6d82a8fdcee902a65e6abed05831183c32a19f4922279e2fdc100479e" exitCode=1 Feb 18 14:22:34 crc kubenswrapper[4739]: I0218 14:22:34.289303 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-577d8f6468-htsrs" event={"ID":"54b11ed9-a528-468d-ad77-89ee83d042c5","Type":"ContainerDied","Data":"5cf720d6d82a8fdcee902a65e6abed05831183c32a19f4922279e2fdc100479e"} Feb 18 14:22:34 crc kubenswrapper[4739]: I0218 14:22:34.289339 4739 scope.go:117] "RemoveContainer" containerID="b5ec72c7a07e63c0579c322c043938266194df6972d26fd4bc42bff8cd2e1b8f" Feb 18 14:22:34 crc kubenswrapper[4739]: I0218 14:22:34.290081 4739 scope.go:117] "RemoveContainer" containerID="5cf720d6d82a8fdcee902a65e6abed05831183c32a19f4922279e2fdc100479e" Feb 18 14:22:34 crc kubenswrapper[4739]: E0218 14:22:34.290550 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-577d8f6468-htsrs_openstack(54b11ed9-a528-468d-ad77-89ee83d042c5)\"" pod="openstack/heat-api-577d8f6468-htsrs" podUID="54b11ed9-a528-468d-ad77-89ee83d042c5" Feb 18 14:22:34 crc kubenswrapper[4739]: I0218 14:22:34.297464 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-78dd4688df-l25nk" event={"ID":"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3","Type":"ContainerStarted","Data":"9eb167e13f280e70cd4100e9f1d09f6c5779edb56e4e0177a7914a8b965455f9"} Feb 18 14:22:34 crc kubenswrapper[4739]: I0218 14:22:34.298752 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:34 crc kubenswrapper[4739]: I0218 14:22:34.311089 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f9138cdd-fae9-4563-8fea-43df3f704da4","Type":"ContainerStarted","Data":"c96d27898d93129b2467e8305f0c2d0db08996645c837c128b9af6d8943220a0"} Feb 18 14:22:34 crc kubenswrapper[4739]: I0218 14:22:34.489761 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a6654bc-87e3-4bd4-9f38-08f64907ea4c" path="/var/lib/kubelet/pods/3a6654bc-87e3-4bd4-9f38-08f64907ea4c/volumes" Feb 18 14:22:34 crc kubenswrapper[4739]: I0218 14:22:34.492769 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93ebc0dc-ca08-4c3e-bf54-d6530d56c322" path="/var/lib/kubelet/pods/93ebc0dc-ca08-4c3e-bf54-d6530d56c322/volumes" Feb 18 14:22:35 crc kubenswrapper[4739]: I0218 14:22:35.324760 4739 scope.go:117] "RemoveContainer" containerID="5cf720d6d82a8fdcee902a65e6abed05831183c32a19f4922279e2fdc100479e" Feb 18 14:22:35 crc kubenswrapper[4739]: E0218 14:22:35.325511 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-577d8f6468-htsrs_openstack(54b11ed9-a528-468d-ad77-89ee83d042c5)\"" pod="openstack/heat-api-577d8f6468-htsrs" podUID="54b11ed9-a528-468d-ad77-89ee83d042c5" Feb 18 14:22:35 crc kubenswrapper[4739]: I0218 14:22:35.326582 4739 generic.go:334] "Generic (PLEG): container finished" podID="d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" containerID="9eb167e13f280e70cd4100e9f1d09f6c5779edb56e4e0177a7914a8b965455f9" exitCode=1 Feb 18 14:22:35 crc kubenswrapper[4739]: I0218 14:22:35.326665 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-78dd4688df-l25nk" event={"ID":"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3","Type":"ContainerDied","Data":"9eb167e13f280e70cd4100e9f1d09f6c5779edb56e4e0177a7914a8b965455f9"} Feb 18 14:22:35 crc kubenswrapper[4739]: I0218 14:22:35.326732 4739 scope.go:117] "RemoveContainer" containerID="380597d90d5bc9556e8ce886d3f60776a514e8cf358489da36b8633a600f819d" Feb 18 14:22:35 crc kubenswrapper[4739]: I0218 14:22:35.326927 4739 scope.go:117] "RemoveContainer" containerID="9eb167e13f280e70cd4100e9f1d09f6c5779edb56e4e0177a7914a8b965455f9" Feb 18 14:22:35 crc kubenswrapper[4739]: E0218 14:22:35.327153 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-78dd4688df-l25nk_openstack(d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3)\"" pod="openstack/heat-cfnapi-78dd4688df-l25nk" podUID="d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" Feb 18 14:22:35 crc kubenswrapper[4739]: I0218 14:22:35.342023 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9138cdd-fae9-4563-8fea-43df3f704da4","Type":"ContainerStarted","Data":"fc1c03ec69e9592ccc3a7f657270ef2ff69bf15bfec1f8afdeef655e026a5dcc"} Feb 18 14:22:36 crc kubenswrapper[4739]: I0218 14:22:36.367973 4739 scope.go:117] "RemoveContainer" containerID="9eb167e13f280e70cd4100e9f1d09f6c5779edb56e4e0177a7914a8b965455f9" Feb 18 14:22:36 crc kubenswrapper[4739]: E0218 14:22:36.368418 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-78dd4688df-l25nk_openstack(d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3)\"" pod="openstack/heat-cfnapi-78dd4688df-l25nk" podUID="d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" Feb 18 14:22:36 crc kubenswrapper[4739]: I0218 
Feb 18 14:22:37 crc kubenswrapper[4739]: I0218 14:22:37.382034 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9138cdd-fae9-4563-8fea-43df3f704da4","Type":"ContainerStarted","Data":"bf09f7375dec60e9ddd87c7e406660d9c06618a91075b3c56a79c613de250d4f"}
Feb 18 14:22:37 crc kubenswrapper[4739]: I0218 14:22:37.382317 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 18 14:22:37 crc kubenswrapper[4739]: I0218 14:22:37.382273 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="ceilometer-central-agent" containerID="cri-o://be93e2023094d77daeb6b0949f4fa4b335efb2b640defae52fa9227796359a82" gracePeriod=30
Feb 18 14:22:37 crc kubenswrapper[4739]: I0218 14:22:37.382430 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="proxy-httpd" containerID="cri-o://bf09f7375dec60e9ddd87c7e406660d9c06618a91075b3c56a79c613de250d4f" gracePeriod=30
Feb 18 14:22:37 crc kubenswrapper[4739]: I0218 14:22:37.382596 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="ceilometer-notification-agent" containerID="cri-o://c96d27898d93129b2467e8305f0c2d0db08996645c837c128b9af6d8943220a0" gracePeriod=30
Feb 18 14:22:37 crc kubenswrapper[4739]: I0218 14:22:37.382707 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="sg-core" containerID="cri-o://fc1c03ec69e9592ccc3a7f657270ef2ff69bf15bfec1f8afdeef655e026a5dcc" gracePeriod=30
Feb 18 14:22:37 crc kubenswrapper[4739]: I0218 14:22:37.427203 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.176661707 podStartE2EDuration="8.427180394s" podCreationTimestamp="2026-02-18 14:22:29 +0000 UTC" firstStartedPulling="2026-02-18 14:22:31.431869432 +0000 UTC m=+1383.927590354" lastFinishedPulling="2026-02-18 14:22:36.682388119 +0000 UTC m=+1389.178109041" observedRunningTime="2026-02-18 14:22:37.412211357 +0000 UTC m=+1389.907932289" watchObservedRunningTime="2026-02-18 14:22:37.427180394 +0000 UTC m=+1389.922901316"
Feb 18 14:22:37 crc kubenswrapper[4739]: E0218 14:22:37.776365 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9138cdd_fae9_4563_8fea_43df3f704da4.slice/crio-conmon-bf09f7375dec60e9ddd87c7e406660d9c06618a91075b3c56a79c613de250d4f.scope\": RecentStats: unable to find data in memory cache]"
Feb 18 14:22:38 crc kubenswrapper[4739]: I0218 14:22:38.285961 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-577d8f6468-htsrs"
Feb 18 14:22:38 crc kubenswrapper[4739]: I0218 14:22:38.286286 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-577d8f6468-htsrs"
Feb 18 14:22:38 crc kubenswrapper[4739]: I0218 14:22:38.287258 4739 scope.go:117] "RemoveContainer" containerID="5cf720d6d82a8fdcee902a65e6abed05831183c32a19f4922279e2fdc100479e"
Feb 18 14:22:38 crc kubenswrapper[4739]: E0218 14:22:38.287724 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-577d8f6468-htsrs_openstack(54b11ed9-a528-468d-ad77-89ee83d042c5)\"" pod="openstack/heat-api-577d8f6468-htsrs" podUID="54b11ed9-a528-468d-ad77-89ee83d042c5"
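The repeated pod_workers.go errors above and below ("back-off 10s restarting failed container ...") are the kubelet's restart back-off for the heat-api and heat-cfnapi containers that keep exiting with code 1. The same condition is visible from the API as a CrashLoopBackOff waiting reason on the container status; a minimal client-go sketch (clientset construction as in the previous sketch, namespace and pod name taken from the log):

// crashloop_status.go - illustrative sketch only; reads the container status
// that corresponds to the CrashLoopBackOff errors in the surrounding log.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod, err := cs.CoreV1().Pods("openstack").Get(context.TODO(), "heat-api-577d8f6468-htsrs", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, st := range pod.Status.ContainerStatuses {
		if st.State.Waiting != nil && st.State.Waiting.Reason == "CrashLoopBackOff" {
			// RestartCount and the waiting message mirror the PLEG
			// "container finished ... exitCode=1" events and the back-off errors.
			fmt.Printf("%s restarts=%d: %s\n", st.Name, st.RestartCount, st.State.Waiting.Message)
		}
	}
}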
Feb 18 14:22:38 crc kubenswrapper[4739]: I0218 14:22:38.348946 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-78dd4688df-l25nk"
Feb 18 14:22:38 crc kubenswrapper[4739]: I0218 14:22:38.349932 4739 scope.go:117] "RemoveContainer" containerID="9eb167e13f280e70cd4100e9f1d09f6c5779edb56e4e0177a7914a8b965455f9"
Feb 18 14:22:38 crc kubenswrapper[4739]: E0218 14:22:38.350233 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-78dd4688df-l25nk_openstack(d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3)\"" pod="openstack/heat-cfnapi-78dd4688df-l25nk" podUID="d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3"
Feb 18 14:22:38 crc kubenswrapper[4739]: I0218 14:22:38.407124 4739 generic.go:334] "Generic (PLEG): container finished" podID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerID="bf09f7375dec60e9ddd87c7e406660d9c06618a91075b3c56a79c613de250d4f" exitCode=0
Feb 18 14:22:38 crc kubenswrapper[4739]: I0218 14:22:38.407144 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9138cdd-fae9-4563-8fea-43df3f704da4","Type":"ContainerDied","Data":"bf09f7375dec60e9ddd87c7e406660d9c06618a91075b3c56a79c613de250d4f"}
Feb 18 14:22:38 crc kubenswrapper[4739]: I0218 14:22:38.407209 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9138cdd-fae9-4563-8fea-43df3f704da4","Type":"ContainerDied","Data":"fc1c03ec69e9592ccc3a7f657270ef2ff69bf15bfec1f8afdeef655e026a5dcc"}
Feb 18 14:22:38 crc kubenswrapper[4739]: I0218 14:22:38.407164 4739 generic.go:334] "Generic (PLEG): container finished" podID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerID="fc1c03ec69e9592ccc3a7f657270ef2ff69bf15bfec1f8afdeef655e026a5dcc" exitCode=2
Feb 18 14:22:38 crc kubenswrapper[4739]: I0218 14:22:38.407242 4739 generic.go:334] "Generic (PLEG): container finished" podID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerID="c96d27898d93129b2467e8305f0c2d0db08996645c837c128b9af6d8943220a0" exitCode=0
Feb 18 14:22:38 crc kubenswrapper[4739]: I0218 14:22:38.407268 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9138cdd-fae9-4563-8fea-43df3f704da4","Type":"ContainerDied","Data":"c96d27898d93129b2467e8305f0c2d0db08996645c837c128b9af6d8943220a0"}
Feb 18 14:22:42 crc kubenswrapper[4739]: I0218 14:22:42.346478 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-59f4cc7b48-2kzkr"
Feb 18 14:22:42 crc kubenswrapper[4739]: I0218 14:22:42.438285 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-577d8f6468-htsrs"]
Feb 18 14:22:42 crc kubenswrapper[4739]: I0218 14:22:42.855552 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-84d894dcf4-4xbcm"
Feb 18 14:22:42 crc kubenswrapper[4739]: I0218 14:22:42.946548 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-78dd4688df-l25nk"]
Feb 18 14:22:42 crc 
kubenswrapper[4739]: I0218 14:22:42.979086 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.106036 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-config-data-custom\") pod \"54b11ed9-a528-468d-ad77-89ee83d042c5\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.106183 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-combined-ca-bundle\") pod \"54b11ed9-a528-468d-ad77-89ee83d042c5\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.106366 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4p84\" (UniqueName: \"kubernetes.io/projected/54b11ed9-a528-468d-ad77-89ee83d042c5-kube-api-access-n4p84\") pod \"54b11ed9-a528-468d-ad77-89ee83d042c5\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.106475 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-config-data\") pod \"54b11ed9-a528-468d-ad77-89ee83d042c5\" (UID: \"54b11ed9-a528-468d-ad77-89ee83d042c5\") " Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.126491 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "54b11ed9-a528-468d-ad77-89ee83d042c5" (UID: "54b11ed9-a528-468d-ad77-89ee83d042c5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.130758 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54b11ed9-a528-468d-ad77-89ee83d042c5-kube-api-access-n4p84" (OuterVolumeSpecName: "kube-api-access-n4p84") pod "54b11ed9-a528-468d-ad77-89ee83d042c5" (UID: "54b11ed9-a528-468d-ad77-89ee83d042c5"). InnerVolumeSpecName "kube-api-access-n4p84". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.165697 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54b11ed9-a528-468d-ad77-89ee83d042c5" (UID: "54b11ed9-a528-468d-ad77-89ee83d042c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.189866 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-config-data" (OuterVolumeSpecName: "config-data") pod "54b11ed9-a528-468d-ad77-89ee83d042c5" (UID: "54b11ed9-a528-468d-ad77-89ee83d042c5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.209209 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4p84\" (UniqueName: \"kubernetes.io/projected/54b11ed9-a528-468d-ad77-89ee83d042c5-kube-api-access-n4p84\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.209248 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.209262 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.209273 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54b11ed9-a528-468d-ad77-89ee83d042c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.458582 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.471184 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-577d8f6468-htsrs" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.471209 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-577d8f6468-htsrs" event={"ID":"54b11ed9-a528-468d-ad77-89ee83d042c5","Type":"ContainerDied","Data":"3101800e54f402ed2a23af1f3aee27b29f2d43b6c77b8db6cee81f7af07674c1"} Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.471274 4739 scope.go:117] "RemoveContainer" containerID="5cf720d6d82a8fdcee902a65e6abed05831183c32a19f4922279e2fdc100479e" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.473405 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-78dd4688df-l25nk" event={"ID":"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3","Type":"ContainerDied","Data":"8eee24ec7f9accbd61a3e88a575fabd5b156dc4338ac144c30c542bf27a434fc"} Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.473502 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-78dd4688df-l25nk" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.515336 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-config-data-custom\") pod \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.515724 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hlgn\" (UniqueName: \"kubernetes.io/projected/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-kube-api-access-7hlgn\") pod \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.515955 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-config-data\") pod \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.516726 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-combined-ca-bundle\") pod \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\" (UID: \"d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3\") " Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.524209 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" (UID: "d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.527058 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-kube-api-access-7hlgn" (OuterVolumeSpecName: "kube-api-access-7hlgn") pod "d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" (UID: "d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3"). InnerVolumeSpecName "kube-api-access-7hlgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.538691 4739 scope.go:117] "RemoveContainer" containerID="9eb167e13f280e70cd4100e9f1d09f6c5779edb56e4e0177a7914a8b965455f9" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.542487 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-577d8f6468-htsrs"] Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.569018 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-577d8f6468-htsrs"] Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.583195 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" (UID: "d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.619764 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.619806 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.619820 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hlgn\" (UniqueName: \"kubernetes.io/projected/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-kube-api-access-7hlgn\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.620924 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-config-data" (OuterVolumeSpecName: "config-data") pod "d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" (UID: "d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.722526 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.828078 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-78dd4688df-l25nk"] Feb 18 14:22:43 crc kubenswrapper[4739]: I0218 14:22:43.847626 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-78dd4688df-l25nk"] Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.427535 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54b11ed9-a528-468d-ad77-89ee83d042c5" path="/var/lib/kubelet/pods/54b11ed9-a528-468d-ad77-89ee83d042c5/volumes" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.429025 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" path="/var/lib/kubelet/pods/d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3/volumes" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.490394 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-frlf8"] Feb 18 14:22:44 crc kubenswrapper[4739]: E0218 14:22:44.493086 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a6654bc-87e3-4bd4-9f38-08f64907ea4c" containerName="heat-cfnapi" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.493110 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a6654bc-87e3-4bd4-9f38-08f64907ea4c" containerName="heat-cfnapi" Feb 18 14:22:44 crc kubenswrapper[4739]: E0218 14:22:44.493926 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" containerName="heat-cfnapi" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.493980 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" containerName="heat-cfnapi" Feb 18 14:22:44 crc kubenswrapper[4739]: E0218 14:22:44.494025 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e8a55f3-28f4-46da-bc87-6d16902b2dba" containerName="neutron-api" Feb 18 14:22:44 crc 
kubenswrapper[4739]: I0218 14:22:44.494058 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e8a55f3-28f4-46da-bc87-6d16902b2dba" containerName="neutron-api" Feb 18 14:22:44 crc kubenswrapper[4739]: E0218 14:22:44.494074 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9337767c-12ba-460b-854a-5c2e69db4a5c" containerName="dnsmasq-dns" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.494087 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9337767c-12ba-460b-854a-5c2e69db4a5c" containerName="dnsmasq-dns" Feb 18 14:22:44 crc kubenswrapper[4739]: E0218 14:22:44.494097 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54b11ed9-a528-468d-ad77-89ee83d042c5" containerName="heat-api" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.494107 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="54b11ed9-a528-468d-ad77-89ee83d042c5" containerName="heat-api" Feb 18 14:22:44 crc kubenswrapper[4739]: E0218 14:22:44.494117 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54b11ed9-a528-468d-ad77-89ee83d042c5" containerName="heat-api" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.494123 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="54b11ed9-a528-468d-ad77-89ee83d042c5" containerName="heat-api" Feb 18 14:22:44 crc kubenswrapper[4739]: E0218 14:22:44.494145 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e8a55f3-28f4-46da-bc87-6d16902b2dba" containerName="neutron-httpd" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.494153 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e8a55f3-28f4-46da-bc87-6d16902b2dba" containerName="neutron-httpd" Feb 18 14:22:44 crc kubenswrapper[4739]: E0218 14:22:44.494199 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93ebc0dc-ca08-4c3e-bf54-d6530d56c322" containerName="heat-api" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.494207 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="93ebc0dc-ca08-4c3e-bf54-d6530d56c322" containerName="heat-api" Feb 18 14:22:44 crc kubenswrapper[4739]: E0218 14:22:44.494221 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9337767c-12ba-460b-854a-5c2e69db4a5c" containerName="init" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.494226 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9337767c-12ba-460b-854a-5c2e69db4a5c" containerName="init" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.495046 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" containerName="heat-cfnapi" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.495068 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="54b11ed9-a528-468d-ad77-89ee83d042c5" containerName="heat-api" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.495082 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="93ebc0dc-ca08-4c3e-bf54-d6530d56c322" containerName="heat-api" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.495099 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a6654bc-87e3-4bd4-9f38-08f64907ea4c" containerName="heat-cfnapi" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.495110 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="54b11ed9-a528-468d-ad77-89ee83d042c5" containerName="heat-api" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.495122 4739 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="7e8a55f3-28f4-46da-bc87-6d16902b2dba" containerName="neutron-api" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.495130 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9337767c-12ba-460b-854a-5c2e69db4a5c" containerName="dnsmasq-dns" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.495138 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e8a55f3-28f4-46da-bc87-6d16902b2dba" containerName="neutron-httpd" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.496184 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-frlf8" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.540007 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-frlf8"] Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.542081 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xh99\" (UniqueName: \"kubernetes.io/projected/290b50b0-4283-4a40-b694-4a5f18b39b1a-kube-api-access-2xh99\") pod \"nova-api-db-create-frlf8\" (UID: \"290b50b0-4283-4a40-b694-4a5f18b39b1a\") " pod="openstack/nova-api-db-create-frlf8" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.542243 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/290b50b0-4283-4a40-b694-4a5f18b39b1a-operator-scripts\") pod \"nova-api-db-create-frlf8\" (UID: \"290b50b0-4283-4a40-b694-4a5f18b39b1a\") " pod="openstack/nova-api-db-create-frlf8" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.614983 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-6q6nn"] Feb 18 14:22:44 crc kubenswrapper[4739]: E0218 14:22:44.615649 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" containerName="heat-cfnapi" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.615671 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" containerName="heat-cfnapi" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.615939 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0fb76ba-5339-4ae5-b2e2-fc4f0cf74fb3" containerName="heat-cfnapi" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.616795 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6q6nn" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.640003 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6q6nn"] Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.644010 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xh99\" (UniqueName: \"kubernetes.io/projected/290b50b0-4283-4a40-b694-4a5f18b39b1a-kube-api-access-2xh99\") pod \"nova-api-db-create-frlf8\" (UID: \"290b50b0-4283-4a40-b694-4a5f18b39b1a\") " pod="openstack/nova-api-db-create-frlf8" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.644124 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/290b50b0-4283-4a40-b694-4a5f18b39b1a-operator-scripts\") pod \"nova-api-db-create-frlf8\" (UID: \"290b50b0-4283-4a40-b694-4a5f18b39b1a\") " pod="openstack/nova-api-db-create-frlf8" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.644156 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n9gj\" (UniqueName: \"kubernetes.io/projected/f689babc-92f9-4e45-8fb3-40722e18cd10-kube-api-access-5n9gj\") pod \"nova-cell0-db-create-6q6nn\" (UID: \"f689babc-92f9-4e45-8fb3-40722e18cd10\") " pod="openstack/nova-cell0-db-create-6q6nn" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.644243 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f689babc-92f9-4e45-8fb3-40722e18cd10-operator-scripts\") pod \"nova-cell0-db-create-6q6nn\" (UID: \"f689babc-92f9-4e45-8fb3-40722e18cd10\") " pod="openstack/nova-cell0-db-create-6q6nn" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.645108 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/290b50b0-4283-4a40-b694-4a5f18b39b1a-operator-scripts\") pod \"nova-api-db-create-frlf8\" (UID: \"290b50b0-4283-4a40-b694-4a5f18b39b1a\") " pod="openstack/nova-api-db-create-frlf8" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.684798 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xh99\" (UniqueName: \"kubernetes.io/projected/290b50b0-4283-4a40-b694-4a5f18b39b1a-kube-api-access-2xh99\") pod \"nova-api-db-create-frlf8\" (UID: \"290b50b0-4283-4a40-b694-4a5f18b39b1a\") " pod="openstack/nova-api-db-create-frlf8" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.714335 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-79vbk"] Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.716054 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-79vbk" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.742525 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-04e8-account-create-update-9qcd6"] Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.744924 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-04e8-account-create-update-9qcd6" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.746280 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n9gj\" (UniqueName: \"kubernetes.io/projected/f689babc-92f9-4e45-8fb3-40722e18cd10-kube-api-access-5n9gj\") pod \"nova-cell0-db-create-6q6nn\" (UID: \"f689babc-92f9-4e45-8fb3-40722e18cd10\") " pod="openstack/nova-cell0-db-create-6q6nn" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.746614 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f689babc-92f9-4e45-8fb3-40722e18cd10-operator-scripts\") pod \"nova-cell0-db-create-6q6nn\" (UID: \"f689babc-92f9-4e45-8fb3-40722e18cd10\") " pod="openstack/nova-cell0-db-create-6q6nn" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.746698 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c33399d1-a28e-4e19-aba8-a218018e5e8b-operator-scripts\") pod \"nova-cell1-db-create-79vbk\" (UID: \"c33399d1-a28e-4e19-aba8-a218018e5e8b\") " pod="openstack/nova-cell1-db-create-79vbk" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.746737 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6gb5\" (UniqueName: \"kubernetes.io/projected/c33399d1-a28e-4e19-aba8-a218018e5e8b-kube-api-access-g6gb5\") pod \"nova-cell1-db-create-79vbk\" (UID: \"c33399d1-a28e-4e19-aba8-a218018e5e8b\") " pod="openstack/nova-cell1-db-create-79vbk" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.747795 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.753288 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f689babc-92f9-4e45-8fb3-40722e18cd10-operator-scripts\") pod \"nova-cell0-db-create-6q6nn\" (UID: \"f689babc-92f9-4e45-8fb3-40722e18cd10\") " pod="openstack/nova-cell0-db-create-6q6nn" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.755111 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-79vbk"] Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.776263 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-04e8-account-create-update-9qcd6"] Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.781716 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n9gj\" (UniqueName: \"kubernetes.io/projected/f689babc-92f9-4e45-8fb3-40722e18cd10-kube-api-access-5n9gj\") pod \"nova-cell0-db-create-6q6nn\" (UID: \"f689babc-92f9-4e45-8fb3-40722e18cd10\") " pod="openstack/nova-cell0-db-create-6q6nn" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.822223 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-frlf8" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.849039 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd-operator-scripts\") pod \"nova-api-04e8-account-create-update-9qcd6\" (UID: \"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd\") " pod="openstack/nova-api-04e8-account-create-update-9qcd6" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.849136 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wsrq\" (UniqueName: \"kubernetes.io/projected/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd-kube-api-access-4wsrq\") pod \"nova-api-04e8-account-create-update-9qcd6\" (UID: \"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd\") " pod="openstack/nova-api-04e8-account-create-update-9qcd6" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.849354 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c33399d1-a28e-4e19-aba8-a218018e5e8b-operator-scripts\") pod \"nova-cell1-db-create-79vbk\" (UID: \"c33399d1-a28e-4e19-aba8-a218018e5e8b\") " pod="openstack/nova-cell1-db-create-79vbk" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.849392 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6gb5\" (UniqueName: \"kubernetes.io/projected/c33399d1-a28e-4e19-aba8-a218018e5e8b-kube-api-access-g6gb5\") pod \"nova-cell1-db-create-79vbk\" (UID: \"c33399d1-a28e-4e19-aba8-a218018e5e8b\") " pod="openstack/nova-cell1-db-create-79vbk" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.850649 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c33399d1-a28e-4e19-aba8-a218018e5e8b-operator-scripts\") pod \"nova-cell1-db-create-79vbk\" (UID: \"c33399d1-a28e-4e19-aba8-a218018e5e8b\") " pod="openstack/nova-cell1-db-create-79vbk" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.894380 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6gb5\" (UniqueName: \"kubernetes.io/projected/c33399d1-a28e-4e19-aba8-a218018e5e8b-kube-api-access-g6gb5\") pod \"nova-cell1-db-create-79vbk\" (UID: \"c33399d1-a28e-4e19-aba8-a218018e5e8b\") " pod="openstack/nova-cell1-db-create-79vbk" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.910135 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-022d-account-create-update-6krg8"] Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.911900 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-022d-account-create-update-6krg8" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.918838 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.944586 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6q6nn" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.952060 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd-operator-scripts\") pod \"nova-api-04e8-account-create-update-9qcd6\" (UID: \"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd\") " pod="openstack/nova-api-04e8-account-create-update-9qcd6" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.952153 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdnwh\" (UniqueName: \"kubernetes.io/projected/429115da-eb66-4dc9-9210-86cd0525a6cf-kube-api-access-qdnwh\") pod \"nova-cell0-022d-account-create-update-6krg8\" (UID: \"429115da-eb66-4dc9-9210-86cd0525a6cf\") " pod="openstack/nova-cell0-022d-account-create-update-6krg8" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.952213 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wsrq\" (UniqueName: \"kubernetes.io/projected/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd-kube-api-access-4wsrq\") pod \"nova-api-04e8-account-create-update-9qcd6\" (UID: \"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd\") " pod="openstack/nova-api-04e8-account-create-update-9qcd6" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.952349 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/429115da-eb66-4dc9-9210-86cd0525a6cf-operator-scripts\") pod \"nova-cell0-022d-account-create-update-6krg8\" (UID: \"429115da-eb66-4dc9-9210-86cd0525a6cf\") " pod="openstack/nova-cell0-022d-account-create-update-6krg8" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.954645 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd-operator-scripts\") pod \"nova-api-04e8-account-create-update-9qcd6\" (UID: \"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd\") " pod="openstack/nova-api-04e8-account-create-update-9qcd6" Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.965912 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-022d-account-create-update-6krg8"] Feb 18 14:22:44 crc kubenswrapper[4739]: I0218 14:22:44.981201 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wsrq\" (UniqueName: \"kubernetes.io/projected/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd-kube-api-access-4wsrq\") pod \"nova-api-04e8-account-create-update-9qcd6\" (UID: \"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd\") " pod="openstack/nova-api-04e8-account-create-update-9qcd6" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.055892 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/429115da-eb66-4dc9-9210-86cd0525a6cf-operator-scripts\") pod \"nova-cell0-022d-account-create-update-6krg8\" (UID: \"429115da-eb66-4dc9-9210-86cd0525a6cf\") " pod="openstack/nova-cell0-022d-account-create-update-6krg8" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.056359 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdnwh\" (UniqueName: \"kubernetes.io/projected/429115da-eb66-4dc9-9210-86cd0525a6cf-kube-api-access-qdnwh\") pod 
\"nova-cell0-022d-account-create-update-6krg8\" (UID: \"429115da-eb66-4dc9-9210-86cd0525a6cf\") " pod="openstack/nova-cell0-022d-account-create-update-6krg8" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.057174 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/429115da-eb66-4dc9-9210-86cd0525a6cf-operator-scripts\") pod \"nova-cell0-022d-account-create-update-6krg8\" (UID: \"429115da-eb66-4dc9-9210-86cd0525a6cf\") " pod="openstack/nova-cell0-022d-account-create-update-6krg8" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.146057 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-79vbk" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.159204 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-04e8-account-create-update-9qcd6" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.160433 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdnwh\" (UniqueName: \"kubernetes.io/projected/429115da-eb66-4dc9-9210-86cd0525a6cf-kube-api-access-qdnwh\") pod \"nova-cell0-022d-account-create-update-6krg8\" (UID: \"429115da-eb66-4dc9-9210-86cd0525a6cf\") " pod="openstack/nova-cell0-022d-account-create-update-6krg8" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.346162 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-8ab4-account-create-update-zkq89"] Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.348683 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8ab4-account-create-update-zkq89" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.381842 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.409578 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8ab4-account-create-update-zkq89"] Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.468165 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27v8n\" (UniqueName: \"kubernetes.io/projected/1f229688-5021-4d28-9109-98071744a102-kube-api-access-27v8n\") pod \"nova-cell1-8ab4-account-create-update-zkq89\" (UID: \"1f229688-5021-4d28-9109-98071744a102\") " pod="openstack/nova-cell1-8ab4-account-create-update-zkq89" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.468295 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f229688-5021-4d28-9109-98071744a102-operator-scripts\") pod \"nova-cell1-8ab4-account-create-update-zkq89\" (UID: \"1f229688-5021-4d28-9109-98071744a102\") " pod="openstack/nova-cell1-8ab4-account-create-update-zkq89" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.561226 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-022d-account-create-update-6krg8" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.571193 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27v8n\" (UniqueName: \"kubernetes.io/projected/1f229688-5021-4d28-9109-98071744a102-kube-api-access-27v8n\") pod \"nova-cell1-8ab4-account-create-update-zkq89\" (UID: \"1f229688-5021-4d28-9109-98071744a102\") " pod="openstack/nova-cell1-8ab4-account-create-update-zkq89" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.571287 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f229688-5021-4d28-9109-98071744a102-operator-scripts\") pod \"nova-cell1-8ab4-account-create-update-zkq89\" (UID: \"1f229688-5021-4d28-9109-98071744a102\") " pod="openstack/nova-cell1-8ab4-account-create-update-zkq89" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.575047 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f229688-5021-4d28-9109-98071744a102-operator-scripts\") pod \"nova-cell1-8ab4-account-create-update-zkq89\" (UID: \"1f229688-5021-4d28-9109-98071744a102\") " pod="openstack/nova-cell1-8ab4-account-create-update-zkq89" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.610239 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27v8n\" (UniqueName: \"kubernetes.io/projected/1f229688-5021-4d28-9109-98071744a102-kube-api-access-27v8n\") pod \"nova-cell1-8ab4-account-create-update-zkq89\" (UID: \"1f229688-5021-4d28-9109-98071744a102\") " pod="openstack/nova-cell1-8ab4-account-create-update-zkq89" Feb 18 14:22:45 crc kubenswrapper[4739]: I0218 14:22:45.666735 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-8ab4-account-create-update-zkq89" Feb 18 14:22:46 crc kubenswrapper[4739]: I0218 14:22:46.292462 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-frlf8"] Feb 18 14:22:46 crc kubenswrapper[4739]: W0218 14:22:46.329796 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod290b50b0_4283_4a40_b694_4a5f18b39b1a.slice/crio-097e3d288c883a688b610c19c89795649e50652a74832f827c0ffad1589349a5 WatchSource:0}: Error finding container 097e3d288c883a688b610c19c89795649e50652a74832f827c0ffad1589349a5: Status 404 returned error can't find the container with id 097e3d288c883a688b610c19c89795649e50652a74832f827c0ffad1589349a5 Feb 18 14:22:46 crc kubenswrapper[4739]: I0218 14:22:46.559657 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-frlf8" event={"ID":"290b50b0-4283-4a40-b694-4a5f18b39b1a","Type":"ContainerStarted","Data":"097e3d288c883a688b610c19c89795649e50652a74832f827c0ffad1589349a5"} Feb 18 14:22:46 crc kubenswrapper[4739]: I0218 14:22:46.577761 4739 generic.go:334] "Generic (PLEG): container finished" podID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerID="be93e2023094d77daeb6b0949f4fa4b335efb2b640defae52fa9227796359a82" exitCode=0 Feb 18 14:22:46 crc kubenswrapper[4739]: I0218 14:22:46.577842 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9138cdd-fae9-4563-8fea-43df3f704da4","Type":"ContainerDied","Data":"be93e2023094d77daeb6b0949f4fa4b335efb2b640defae52fa9227796359a82"} Feb 18 14:22:46 crc kubenswrapper[4739]: I0218 14:22:46.582067 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6699e575-f077-433c-a257-f65f329d6e69","Type":"ContainerStarted","Data":"cf3227a54466fa6eb6ab918b31a85c454ca4079bc1e21308ecac7a95552305d2"} Feb 18 14:22:46 crc kubenswrapper[4739]: I0218 14:22:46.608932 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.016979257 podStartE2EDuration="38.608910525s" podCreationTimestamp="2026-02-18 14:22:08 +0000 UTC" firstStartedPulling="2026-02-18 14:22:09.462723852 +0000 UTC m=+1361.958444764" lastFinishedPulling="2026-02-18 14:22:45.05465511 +0000 UTC m=+1397.550376032" observedRunningTime="2026-02-18 14:22:46.597654282 +0000 UTC m=+1399.093375214" watchObservedRunningTime="2026-02-18 14:22:46.608910525 +0000 UTC m=+1399.104631457" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.035646 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-79vbk"] Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.049837 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-022d-account-create-update-6krg8"] Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.059020 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6q6nn"] Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.081801 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-04e8-account-create-update-9qcd6"] Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.100519 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8ab4-account-create-update-zkq89"] Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.175376 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.259178 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-config-data\") pod \"f9138cdd-fae9-4563-8fea-43df3f704da4\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.259497 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-sg-core-conf-yaml\") pod \"f9138cdd-fae9-4563-8fea-43df3f704da4\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.259533 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9138cdd-fae9-4563-8fea-43df3f704da4-run-httpd\") pod \"f9138cdd-fae9-4563-8fea-43df3f704da4\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.259614 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bldt\" (UniqueName: \"kubernetes.io/projected/f9138cdd-fae9-4563-8fea-43df3f704da4-kube-api-access-6bldt\") pod \"f9138cdd-fae9-4563-8fea-43df3f704da4\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.259697 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-combined-ca-bundle\") pod \"f9138cdd-fae9-4563-8fea-43df3f704da4\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.259751 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9138cdd-fae9-4563-8fea-43df3f704da4-log-httpd\") pod \"f9138cdd-fae9-4563-8fea-43df3f704da4\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.259803 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-scripts\") pod \"f9138cdd-fae9-4563-8fea-43df3f704da4\" (UID: \"f9138cdd-fae9-4563-8fea-43df3f704da4\") " Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.261392 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9138cdd-fae9-4563-8fea-43df3f704da4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f9138cdd-fae9-4563-8fea-43df3f704da4" (UID: "f9138cdd-fae9-4563-8fea-43df3f704da4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.264855 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9138cdd-fae9-4563-8fea-43df3f704da4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f9138cdd-fae9-4563-8fea-43df3f704da4" (UID: "f9138cdd-fae9-4563-8fea-43df3f704da4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.275824 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-scripts" (OuterVolumeSpecName: "scripts") pod "f9138cdd-fae9-4563-8fea-43df3f704da4" (UID: "f9138cdd-fae9-4563-8fea-43df3f704da4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.299643 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9138cdd-fae9-4563-8fea-43df3f704da4-kube-api-access-6bldt" (OuterVolumeSpecName: "kube-api-access-6bldt") pod "f9138cdd-fae9-4563-8fea-43df3f704da4" (UID: "f9138cdd-fae9-4563-8fea-43df3f704da4"). InnerVolumeSpecName "kube-api-access-6bldt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.364261 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9138cdd-fae9-4563-8fea-43df3f704da4-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.364659 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.364877 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9138cdd-fae9-4563-8fea-43df3f704da4-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.364894 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bldt\" (UniqueName: \"kubernetes.io/projected/f9138cdd-fae9-4563-8fea-43df3f704da4-kube-api-access-6bldt\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.454668 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f9138cdd-fae9-4563-8fea-43df3f704da4" (UID: "f9138cdd-fae9-4563-8fea-43df3f704da4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.467583 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.500363 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9138cdd-fae9-4563-8fea-43df3f704da4" (UID: "f9138cdd-fae9-4563-8fea-43df3f704da4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.570210 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.595427 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9138cdd-fae9-4563-8fea-43df3f704da4","Type":"ContainerDied","Data":"f49f9c840da6b7b1c2c162adfd6ff58755e7165a8c2d9b23a26c34f3222084fc"} Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.595698 4739 scope.go:117] "RemoveContainer" containerID="bf09f7375dec60e9ddd87c7e406660d9c06618a91075b3c56a79c613de250d4f" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.595779 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.599223 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-022d-account-create-update-6krg8" event={"ID":"429115da-eb66-4dc9-9210-86cd0525a6cf","Type":"ContainerStarted","Data":"8f643b3c3825709517f6c978998cc8e0df337adc7bd11bd8db7809c215c86a97"} Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.602144 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-79vbk" event={"ID":"c33399d1-a28e-4e19-aba8-a218018e5e8b","Type":"ContainerStarted","Data":"a45f373a8cda7e2f9713f9d9f6800072809e946454ad4d5c72a1b5d375df9110"} Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.618900 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-04e8-account-create-update-9qcd6" event={"ID":"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd","Type":"ContainerStarted","Data":"9cd1858f14ae85d0f7063cb271cc274976ef43ee00a4a7c3fd47f776b8e0b625"} Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.628313 4739 generic.go:334] "Generic (PLEG): container finished" podID="290b50b0-4283-4a40-b694-4a5f18b39b1a" containerID="164ed4c991352152994d527ba5112c6e7d1903b4f2261af5e3d479652dee7c0f" exitCode=0 Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.628650 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-frlf8" event={"ID":"290b50b0-4283-4a40-b694-4a5f18b39b1a","Type":"ContainerDied","Data":"164ed4c991352152994d527ba5112c6e7d1903b4f2261af5e3d479652dee7c0f"} Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.642704 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6q6nn" event={"ID":"f689babc-92f9-4e45-8fb3-40722e18cd10","Type":"ContainerStarted","Data":"cac8e29f4124c59125768bbdfaf1b58c6ed47894b8741c363bc06a9800327c90"} Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.646835 4739 scope.go:117] "RemoveContainer" containerID="fc1c03ec69e9592ccc3a7f657270ef2ff69bf15bfec1f8afdeef655e026a5dcc" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.681736 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8ab4-account-create-update-zkq89" event={"ID":"1f229688-5021-4d28-9109-98071744a102","Type":"ContainerStarted","Data":"8af9a41bdf06992c533dfc886a1664fdb3c42e54989216dae72b122b1c38a89d"} Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.701016 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-config-data" 
(OuterVolumeSpecName: "config-data") pod "f9138cdd-fae9-4563-8fea-43df3f704da4" (UID: "f9138cdd-fae9-4563-8fea-43df3f704da4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.725621 4739 scope.go:117] "RemoveContainer" containerID="c96d27898d93129b2467e8305f0c2d0db08996645c837c128b9af6d8943220a0" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.775797 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9138cdd-fae9-4563-8fea-43df3f704da4-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.785539 4739 scope.go:117] "RemoveContainer" containerID="be93e2023094d77daeb6b0949f4fa4b335efb2b640defae52fa9227796359a82" Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.966045 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:22:47 crc kubenswrapper[4739]: I0218 14:22:47.980229 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.007811 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:22:48 crc kubenswrapper[4739]: E0218 14:22:48.008333 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="ceilometer-central-agent" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.008350 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="ceilometer-central-agent" Feb 18 14:22:48 crc kubenswrapper[4739]: E0218 14:22:48.008368 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="sg-core" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.008374 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="sg-core" Feb 18 14:22:48 crc kubenswrapper[4739]: E0218 14:22:48.008399 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="proxy-httpd" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.008407 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="proxy-httpd" Feb 18 14:22:48 crc kubenswrapper[4739]: E0218 14:22:48.008434 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="ceilometer-notification-agent" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.008454 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="ceilometer-notification-agent" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.008644 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="sg-core" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.008663 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="ceilometer-notification-agent" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.008682 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="proxy-httpd" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 
14:22:48.008690 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" containerName="ceilometer-central-agent" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.012870 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.015766 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.015887 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.027995 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.109339 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.109759 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcscx\" (UniqueName: \"kubernetes.io/projected/043c7e92-488e-4581-b683-a50c6f3e4262-kube-api-access-zcscx\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.109901 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043c7e92-488e-4581-b683-a50c6f3e4262-log-httpd\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.109946 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.110014 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043c7e92-488e-4581-b683-a50c6f3e4262-run-httpd\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.110092 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-scripts\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.110119 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-config-data\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.126550 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.194994 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-8c9d795d5-hcnvm"] Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.195233 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-8c9d795d5-hcnvm" podUID="48f5a3e4-7bee-4689-b7b8-5869536bebb6" containerName="heat-engine" containerID="cri-o://82135dc6825fa5f144d383addc8105986ce22d1f6d4310421f2ea3bc7b02b990" gracePeriod=60 Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.212148 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043c7e92-488e-4581-b683-a50c6f3e4262-log-httpd\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.212241 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.212293 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043c7e92-488e-4581-b683-a50c6f3e4262-run-httpd\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.212353 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-scripts\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.212368 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-config-data\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.212468 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.212528 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcscx\" (UniqueName: \"kubernetes.io/projected/043c7e92-488e-4581-b683-a50c6f3e4262-kube-api-access-zcscx\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.213301 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043c7e92-488e-4581-b683-a50c6f3e4262-log-httpd\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.214250 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/043c7e92-488e-4581-b683-a50c6f3e4262-run-httpd\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.223781 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.232714 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-scripts\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.236591 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcscx\" (UniqueName: \"kubernetes.io/projected/043c7e92-488e-4581-b683-a50c6f3e4262-kube-api-access-zcscx\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.236994 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-config-data\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.246746 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.372133 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.436220 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9138cdd-fae9-4563-8fea-43df3f704da4" path="/var/lib/kubelet/pods/f9138cdd-fae9-4563-8fea-43df3f704da4/volumes" Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.728814 4739 generic.go:334] "Generic (PLEG): container finished" podID="1f229688-5021-4d28-9109-98071744a102" containerID="cd193d9c848f0cb5846f4803a361ea578be3e4975f2d687992d1efc73cd54125" exitCode=0 Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.729214 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8ab4-account-create-update-zkq89" event={"ID":"1f229688-5021-4d28-9109-98071744a102","Type":"ContainerDied","Data":"cd193d9c848f0cb5846f4803a361ea578be3e4975f2d687992d1efc73cd54125"} Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.763278 4739 generic.go:334] "Generic (PLEG): container finished" podID="429115da-eb66-4dc9-9210-86cd0525a6cf" containerID="f2e4b9fb06b8dfc6962768e47edc73a399125a6a5af8a24a17fe6e665b490f62" exitCode=0 Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.764855 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-022d-account-create-update-6krg8" event={"ID":"429115da-eb66-4dc9-9210-86cd0525a6cf","Type":"ContainerDied","Data":"f2e4b9fb06b8dfc6962768e47edc73a399125a6a5af8a24a17fe6e665b490f62"} Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.787416 4739 generic.go:334] "Generic (PLEG): container finished" podID="c33399d1-a28e-4e19-aba8-a218018e5e8b" containerID="d354c12b67eababcd672627661526374e41cf79bf2c5f51fc2d961512732ad80" exitCode=0 Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.787557 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-79vbk" event={"ID":"c33399d1-a28e-4e19-aba8-a218018e5e8b","Type":"ContainerDied","Data":"d354c12b67eababcd672627661526374e41cf79bf2c5f51fc2d961512732ad80"} Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.817439 4739 generic.go:334] "Generic (PLEG): container finished" podID="1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd" containerID="c294346ed483351749b57b335bfd04c525dff76c2eb0efbc4e1ea2d1c1b22ce8" exitCode=0 Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.817736 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-04e8-account-create-update-9qcd6" event={"ID":"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd","Type":"ContainerDied","Data":"c294346ed483351749b57b335bfd04c525dff76c2eb0efbc4e1ea2d1c1b22ce8"} Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.827884 4739 generic.go:334] "Generic (PLEG): container finished" podID="f689babc-92f9-4e45-8fb3-40722e18cd10" containerID="f180991429bb7c01f25e8e0932cfc4a2c2e639764155f5051da2395874ce4177" exitCode=0 Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.828232 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6q6nn" event={"ID":"f689babc-92f9-4e45-8fb3-40722e18cd10","Type":"ContainerDied","Data":"f180991429bb7c01f25e8e0932cfc4a2c2e639764155f5051da2395874ce4177"} Feb 18 14:22:48 crc kubenswrapper[4739]: I0218 14:22:48.964879 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:22:49 crc kubenswrapper[4739]: I0218 14:22:49.369302 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-frlf8" Feb 18 14:22:49 crc kubenswrapper[4739]: I0218 14:22:49.557916 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xh99\" (UniqueName: \"kubernetes.io/projected/290b50b0-4283-4a40-b694-4a5f18b39b1a-kube-api-access-2xh99\") pod \"290b50b0-4283-4a40-b694-4a5f18b39b1a\" (UID: \"290b50b0-4283-4a40-b694-4a5f18b39b1a\") " Feb 18 14:22:49 crc kubenswrapper[4739]: I0218 14:22:49.558236 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/290b50b0-4283-4a40-b694-4a5f18b39b1a-operator-scripts\") pod \"290b50b0-4283-4a40-b694-4a5f18b39b1a\" (UID: \"290b50b0-4283-4a40-b694-4a5f18b39b1a\") " Feb 18 14:22:49 crc kubenswrapper[4739]: I0218 14:22:49.559003 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/290b50b0-4283-4a40-b694-4a5f18b39b1a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "290b50b0-4283-4a40-b694-4a5f18b39b1a" (UID: "290b50b0-4283-4a40-b694-4a5f18b39b1a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:22:49 crc kubenswrapper[4739]: I0218 14:22:49.563666 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/290b50b0-4283-4a40-b694-4a5f18b39b1a-kube-api-access-2xh99" (OuterVolumeSpecName: "kube-api-access-2xh99") pod "290b50b0-4283-4a40-b694-4a5f18b39b1a" (UID: "290b50b0-4283-4a40-b694-4a5f18b39b1a"). InnerVolumeSpecName "kube-api-access-2xh99". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:22:49 crc kubenswrapper[4739]: I0218 14:22:49.675169 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xh99\" (UniqueName: \"kubernetes.io/projected/290b50b0-4283-4a40-b694-4a5f18b39b1a-kube-api-access-2xh99\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:49 crc kubenswrapper[4739]: I0218 14:22:49.675431 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/290b50b0-4283-4a40-b694-4a5f18b39b1a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:49 crc kubenswrapper[4739]: I0218 14:22:49.841518 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043c7e92-488e-4581-b683-a50c6f3e4262","Type":"ContainerStarted","Data":"3e20d5bc67da999c67b2b030638e14f2a7846dbe20d76ce5dce6686024c72645"} Feb 18 14:22:49 crc kubenswrapper[4739]: I0218 14:22:49.841563 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043c7e92-488e-4581-b683-a50c6f3e4262","Type":"ContainerStarted","Data":"363130708c866244a3e77f2ddb6aef5f2bdac939b2f9d1e05276302b2523678f"} Feb 18 14:22:49 crc kubenswrapper[4739]: I0218 14:22:49.846780 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-frlf8" Feb 18 14:22:49 crc kubenswrapper[4739]: I0218 14:22:49.847558 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-frlf8" event={"ID":"290b50b0-4283-4a40-b694-4a5f18b39b1a","Type":"ContainerDied","Data":"097e3d288c883a688b610c19c89795649e50652a74832f827c0ffad1589349a5"} Feb 18 14:22:49 crc kubenswrapper[4739]: I0218 14:22:49.847605 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="097e3d288c883a688b610c19c89795649e50652a74832f827c0ffad1589349a5" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.527120 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8ab4-account-create-update-zkq89" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.721608 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f229688-5021-4d28-9109-98071744a102-operator-scripts\") pod \"1f229688-5021-4d28-9109-98071744a102\" (UID: \"1f229688-5021-4d28-9109-98071744a102\") " Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.721801 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27v8n\" (UniqueName: \"kubernetes.io/projected/1f229688-5021-4d28-9109-98071744a102-kube-api-access-27v8n\") pod \"1f229688-5021-4d28-9109-98071744a102\" (UID: \"1f229688-5021-4d28-9109-98071744a102\") " Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.723058 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f229688-5021-4d28-9109-98071744a102-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f229688-5021-4d28-9109-98071744a102" (UID: "1f229688-5021-4d28-9109-98071744a102"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.744751 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f229688-5021-4d28-9109-98071744a102-kube-api-access-27v8n" (OuterVolumeSpecName: "kube-api-access-27v8n") pod "1f229688-5021-4d28-9109-98071744a102" (UID: "1f229688-5021-4d28-9109-98071744a102"). InnerVolumeSpecName "kube-api-access-27v8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.773886 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-79vbk" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.778612 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6q6nn" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.789794 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-022d-account-create-update-6krg8" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.830544 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f229688-5021-4d28-9109-98071744a102-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.830632 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27v8n\" (UniqueName: \"kubernetes.io/projected/1f229688-5021-4d28-9109-98071744a102-kube-api-access-27v8n\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.845858 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-04e8-account-create-update-9qcd6" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.894167 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8ab4-account-create-update-zkq89" event={"ID":"1f229688-5021-4d28-9109-98071744a102","Type":"ContainerDied","Data":"8af9a41bdf06992c533dfc886a1664fdb3c42e54989216dae72b122b1c38a89d"} Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.894215 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8af9a41bdf06992c533dfc886a1664fdb3c42e54989216dae72b122b1c38a89d" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.894300 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8ab4-account-create-update-zkq89" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.896485 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-022d-account-create-update-6krg8" event={"ID":"429115da-eb66-4dc9-9210-86cd0525a6cf","Type":"ContainerDied","Data":"8f643b3c3825709517f6c978998cc8e0df337adc7bd11bd8db7809c215c86a97"} Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.896526 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f643b3c3825709517f6c978998cc8e0df337adc7bd11bd8db7809c215c86a97" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.896579 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-022d-account-create-update-6krg8" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.899637 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043c7e92-488e-4581-b683-a50c6f3e4262","Type":"ContainerStarted","Data":"0bb35ababf8f49716c465fd1a071a3fc61371f1c41007f69d57d1ece07a81b5b"} Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.912528 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-79vbk" event={"ID":"c33399d1-a28e-4e19-aba8-a218018e5e8b","Type":"ContainerDied","Data":"a45f373a8cda7e2f9713f9d9f6800072809e946454ad4d5c72a1b5d375df9110"} Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.912570 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a45f373a8cda7e2f9713f9d9f6800072809e946454ad4d5c72a1b5d375df9110" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.912637 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-79vbk" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.915123 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-04e8-account-create-update-9qcd6" event={"ID":"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd","Type":"ContainerDied","Data":"9cd1858f14ae85d0f7063cb271cc274976ef43ee00a4a7c3fd47f776b8e0b625"} Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.915149 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cd1858f14ae85d0f7063cb271cc274976ef43ee00a4a7c3fd47f776b8e0b625" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.915190 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-04e8-account-create-update-9qcd6" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.917147 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6q6nn" event={"ID":"f689babc-92f9-4e45-8fb3-40722e18cd10","Type":"ContainerDied","Data":"cac8e29f4124c59125768bbdfaf1b58c6ed47894b8741c363bc06a9800327c90"} Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.917170 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cac8e29f4124c59125768bbdfaf1b58c6ed47894b8741c363bc06a9800327c90" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.917208 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6q6nn" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.936866 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f689babc-92f9-4e45-8fb3-40722e18cd10-operator-scripts\") pod \"f689babc-92f9-4e45-8fb3-40722e18cd10\" (UID: \"f689babc-92f9-4e45-8fb3-40722e18cd10\") " Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.936979 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n9gj\" (UniqueName: \"kubernetes.io/projected/f689babc-92f9-4e45-8fb3-40722e18cd10-kube-api-access-5n9gj\") pod \"f689babc-92f9-4e45-8fb3-40722e18cd10\" (UID: \"f689babc-92f9-4e45-8fb3-40722e18cd10\") " Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.937093 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c33399d1-a28e-4e19-aba8-a218018e5e8b-operator-scripts\") pod \"c33399d1-a28e-4e19-aba8-a218018e5e8b\" (UID: \"c33399d1-a28e-4e19-aba8-a218018e5e8b\") " Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.937121 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wsrq\" (UniqueName: \"kubernetes.io/projected/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd-kube-api-access-4wsrq\") pod \"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd\" (UID: \"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd\") " Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.937206 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/429115da-eb66-4dc9-9210-86cd0525a6cf-operator-scripts\") pod \"429115da-eb66-4dc9-9210-86cd0525a6cf\" (UID: \"429115da-eb66-4dc9-9210-86cd0525a6cf\") " Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.937272 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd-operator-scripts\") pod \"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd\" (UID: \"1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd\") " Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.937314 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f689babc-92f9-4e45-8fb3-40722e18cd10-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f689babc-92f9-4e45-8fb3-40722e18cd10" (UID: "f689babc-92f9-4e45-8fb3-40722e18cd10"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.937333 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6gb5\" (UniqueName: \"kubernetes.io/projected/c33399d1-a28e-4e19-aba8-a218018e5e8b-kube-api-access-g6gb5\") pod \"c33399d1-a28e-4e19-aba8-a218018e5e8b\" (UID: \"c33399d1-a28e-4e19-aba8-a218018e5e8b\") " Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.937395 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdnwh\" (UniqueName: \"kubernetes.io/projected/429115da-eb66-4dc9-9210-86cd0525a6cf-kube-api-access-qdnwh\") pod \"429115da-eb66-4dc9-9210-86cd0525a6cf\" (UID: \"429115da-eb66-4dc9-9210-86cd0525a6cf\") " Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.938502 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f689babc-92f9-4e45-8fb3-40722e18cd10-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.938518 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd" (UID: "1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.939016 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c33399d1-a28e-4e19-aba8-a218018e5e8b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c33399d1-a28e-4e19-aba8-a218018e5e8b" (UID: "c33399d1-a28e-4e19-aba8-a218018e5e8b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.939080 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/429115da-eb66-4dc9-9210-86cd0525a6cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "429115da-eb66-4dc9-9210-86cd0525a6cf" (UID: "429115da-eb66-4dc9-9210-86cd0525a6cf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.945775 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/429115da-eb66-4dc9-9210-86cd0525a6cf-kube-api-access-qdnwh" (OuterVolumeSpecName: "kube-api-access-qdnwh") pod "429115da-eb66-4dc9-9210-86cd0525a6cf" (UID: "429115da-eb66-4dc9-9210-86cd0525a6cf"). InnerVolumeSpecName "kube-api-access-qdnwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.945828 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd-kube-api-access-4wsrq" (OuterVolumeSpecName: "kube-api-access-4wsrq") pod "1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd" (UID: "1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd"). InnerVolumeSpecName "kube-api-access-4wsrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.947582 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c33399d1-a28e-4e19-aba8-a218018e5e8b-kube-api-access-g6gb5" (OuterVolumeSpecName: "kube-api-access-g6gb5") pod "c33399d1-a28e-4e19-aba8-a218018e5e8b" (UID: "c33399d1-a28e-4e19-aba8-a218018e5e8b"). InnerVolumeSpecName "kube-api-access-g6gb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:22:50 crc kubenswrapper[4739]: I0218 14:22:50.949738 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f689babc-92f9-4e45-8fb3-40722e18cd10-kube-api-access-5n9gj" (OuterVolumeSpecName: "kube-api-access-5n9gj") pod "f689babc-92f9-4e45-8fb3-40722e18cd10" (UID: "f689babc-92f9-4e45-8fb3-40722e18cd10"). InnerVolumeSpecName "kube-api-access-5n9gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:22:51 crc kubenswrapper[4739]: I0218 14:22:51.040494 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:51 crc kubenswrapper[4739]: I0218 14:22:51.040529 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6gb5\" (UniqueName: \"kubernetes.io/projected/c33399d1-a28e-4e19-aba8-a218018e5e8b-kube-api-access-g6gb5\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:51 crc kubenswrapper[4739]: I0218 14:22:51.040544 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdnwh\" (UniqueName: \"kubernetes.io/projected/429115da-eb66-4dc9-9210-86cd0525a6cf-kube-api-access-qdnwh\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:51 crc kubenswrapper[4739]: I0218 14:22:51.040554 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n9gj\" (UniqueName: \"kubernetes.io/projected/f689babc-92f9-4e45-8fb3-40722e18cd10-kube-api-access-5n9gj\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:51 crc kubenswrapper[4739]: I0218 14:22:51.040564 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c33399d1-a28e-4e19-aba8-a218018e5e8b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:51 crc kubenswrapper[4739]: I0218 14:22:51.040573 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wsrq\" (UniqueName: \"kubernetes.io/projected/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd-kube-api-access-4wsrq\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:51 crc kubenswrapper[4739]: I0218 14:22:51.040581 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/429115da-eb66-4dc9-9210-86cd0525a6cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:22:51 crc kubenswrapper[4739]: E0218 14:22:51.131982 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = 
command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="82135dc6825fa5f144d383addc8105986ce22d1f6d4310421f2ea3bc7b02b990" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:22:51 crc kubenswrapper[4739]: E0218 14:22:51.133129 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="82135dc6825fa5f144d383addc8105986ce22d1f6d4310421f2ea3bc7b02b990" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:22:51 crc kubenswrapper[4739]: E0218 14:22:51.134182 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="82135dc6825fa5f144d383addc8105986ce22d1f6d4310421f2ea3bc7b02b990" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:22:51 crc kubenswrapper[4739]: E0218 14:22:51.134226 4739 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-8c9d795d5-hcnvm" podUID="48f5a3e4-7bee-4689-b7b8-5869536bebb6" containerName="heat-engine" Feb 18 14:22:51 crc kubenswrapper[4739]: I0218 14:22:51.953694 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043c7e92-488e-4581-b683-a50c6f3e4262","Type":"ContainerStarted","Data":"68e9714ba536a43d37501d6b7f010d3c6c39bb5acb025c1ebc16c210fbdc0c5c"} Feb 18 14:22:53 crc kubenswrapper[4739]: I0218 14:22:53.977406 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043c7e92-488e-4581-b683-a50c6f3e4262","Type":"ContainerStarted","Data":"4436b566cc1f05e9fd1f4a6b477aee31ea85c52d7a160c7100ca69ed4da051cd"} Feb 18 14:22:53 crc kubenswrapper[4739]: I0218 14:22:53.977976 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 14:22:53 crc kubenswrapper[4739]: I0218 14:22:53.999882 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.675379856 podStartE2EDuration="6.999856295s" podCreationTimestamp="2026-02-18 14:22:47 +0000 UTC" firstStartedPulling="2026-02-18 14:22:48.97825544 +0000 UTC m=+1401.473976362" lastFinishedPulling="2026-02-18 14:22:53.302731879 +0000 UTC m=+1405.798452801" observedRunningTime="2026-02-18 14:22:53.995167097 +0000 UTC m=+1406.490888029" watchObservedRunningTime="2026-02-18 14:22:53.999856295 +0000 UTC m=+1406.495577217" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.182404 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xfg9d"] Feb 18 14:22:55 crc kubenswrapper[4739]: E0218 14:22:55.182913 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="429115da-eb66-4dc9-9210-86cd0525a6cf" containerName="mariadb-account-create-update" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.182931 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="429115da-eb66-4dc9-9210-86cd0525a6cf" containerName="mariadb-account-create-update" Feb 18 14:22:55 crc kubenswrapper[4739]: E0218 14:22:55.182953 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f689babc-92f9-4e45-8fb3-40722e18cd10" 
containerName="mariadb-database-create" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.182961 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f689babc-92f9-4e45-8fb3-40722e18cd10" containerName="mariadb-database-create" Feb 18 14:22:55 crc kubenswrapper[4739]: E0218 14:22:55.182985 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="290b50b0-4283-4a40-b694-4a5f18b39b1a" containerName="mariadb-database-create" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.182994 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="290b50b0-4283-4a40-b694-4a5f18b39b1a" containerName="mariadb-database-create" Feb 18 14:22:55 crc kubenswrapper[4739]: E0218 14:22:55.183013 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd" containerName="mariadb-account-create-update" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.183021 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd" containerName="mariadb-account-create-update" Feb 18 14:22:55 crc kubenswrapper[4739]: E0218 14:22:55.183036 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c33399d1-a28e-4e19-aba8-a218018e5e8b" containerName="mariadb-database-create" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.183043 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33399d1-a28e-4e19-aba8-a218018e5e8b" containerName="mariadb-database-create" Feb 18 14:22:55 crc kubenswrapper[4739]: E0218 14:22:55.183075 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f229688-5021-4d28-9109-98071744a102" containerName="mariadb-account-create-update" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.183083 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f229688-5021-4d28-9109-98071744a102" containerName="mariadb-account-create-update" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.183327 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f689babc-92f9-4e45-8fb3-40722e18cd10" containerName="mariadb-database-create" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.183344 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd" containerName="mariadb-account-create-update" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.183359 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c33399d1-a28e-4e19-aba8-a218018e5e8b" containerName="mariadb-database-create" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.183376 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="429115da-eb66-4dc9-9210-86cd0525a6cf" containerName="mariadb-account-create-update" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.183395 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f229688-5021-4d28-9109-98071744a102" containerName="mariadb-account-create-update" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.183409 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="290b50b0-4283-4a40-b694-4a5f18b39b1a" containerName="mariadb-database-create" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.184374 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xfg9d" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.192936 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.201731 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-r74ht" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.201913 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.222200 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xfg9d"] Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.339932 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq668\" (UniqueName: \"kubernetes.io/projected/2ed7afcd-a9be-4c59-836d-355e4c502a01-kube-api-access-fq668\") pod \"nova-cell0-conductor-db-sync-xfg9d\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " pod="openstack/nova-cell0-conductor-db-sync-xfg9d" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.340038 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-scripts\") pod \"nova-cell0-conductor-db-sync-xfg9d\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " pod="openstack/nova-cell0-conductor-db-sync-xfg9d" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.340420 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xfg9d\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " pod="openstack/nova-cell0-conductor-db-sync-xfg9d" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.340881 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-config-data\") pod \"nova-cell0-conductor-db-sync-xfg9d\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " pod="openstack/nova-cell0-conductor-db-sync-xfg9d" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.443697 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-config-data\") pod \"nova-cell0-conductor-db-sync-xfg9d\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " pod="openstack/nova-cell0-conductor-db-sync-xfg9d" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.444799 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq668\" (UniqueName: \"kubernetes.io/projected/2ed7afcd-a9be-4c59-836d-355e4c502a01-kube-api-access-fq668\") pod \"nova-cell0-conductor-db-sync-xfg9d\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " pod="openstack/nova-cell0-conductor-db-sync-xfg9d" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.444859 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-scripts\") pod \"nova-cell0-conductor-db-sync-xfg9d\" (UID: 
\"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " pod="openstack/nova-cell0-conductor-db-sync-xfg9d" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.444973 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xfg9d\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " pod="openstack/nova-cell0-conductor-db-sync-xfg9d" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.450050 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-scripts\") pod \"nova-cell0-conductor-db-sync-xfg9d\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " pod="openstack/nova-cell0-conductor-db-sync-xfg9d" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.452812 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-config-data\") pod \"nova-cell0-conductor-db-sync-xfg9d\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " pod="openstack/nova-cell0-conductor-db-sync-xfg9d" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.466168 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq668\" (UniqueName: \"kubernetes.io/projected/2ed7afcd-a9be-4c59-836d-355e4c502a01-kube-api-access-fq668\") pod \"nova-cell0-conductor-db-sync-xfg9d\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " pod="openstack/nova-cell0-conductor-db-sync-xfg9d" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.466172 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xfg9d\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " pod="openstack/nova-cell0-conductor-db-sync-xfg9d" Feb 18 14:22:55 crc kubenswrapper[4739]: I0218 14:22:55.506946 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xfg9d" Feb 18 14:22:56 crc kubenswrapper[4739]: I0218 14:22:56.358740 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xfg9d"] Feb 18 14:22:57 crc kubenswrapper[4739]: I0218 14:22:57.075627 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xfg9d" event={"ID":"2ed7afcd-a9be-4c59-836d-355e4c502a01","Type":"ContainerStarted","Data":"40ea49f88d331b4c7e345388fbc286ebb3f9c3af1caee046df2e917b02eb12a7"} Feb 18 14:23:01 crc kubenswrapper[4739]: E0218 14:23:01.139689 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="82135dc6825fa5f144d383addc8105986ce22d1f6d4310421f2ea3bc7b02b990" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:23:01 crc kubenswrapper[4739]: E0218 14:23:01.143072 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="82135dc6825fa5f144d383addc8105986ce22d1f6d4310421f2ea3bc7b02b990" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:23:01 crc kubenswrapper[4739]: E0218 14:23:01.144759 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="82135dc6825fa5f144d383addc8105986ce22d1f6d4310421f2ea3bc7b02b990" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:23:01 crc kubenswrapper[4739]: E0218 14:23:01.144802 4739 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-8c9d795d5-hcnvm" podUID="48f5a3e4-7bee-4689-b7b8-5869536bebb6" containerName="heat-engine" Feb 18 14:23:04 crc kubenswrapper[4739]: I0218 14:23:04.206797 4739 generic.go:334] "Generic (PLEG): container finished" podID="48f5a3e4-7bee-4689-b7b8-5869536bebb6" containerID="82135dc6825fa5f144d383addc8105986ce22d1f6d4310421f2ea3bc7b02b990" exitCode=0 Feb 18 14:23:04 crc kubenswrapper[4739]: I0218 14:23:04.207250 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8c9d795d5-hcnvm" event={"ID":"48f5a3e4-7bee-4689-b7b8-5869536bebb6","Type":"ContainerDied","Data":"82135dc6825fa5f144d383addc8105986ce22d1f6d4310421f2ea3bc7b02b990"} Feb 18 14:23:06 crc kubenswrapper[4739]: I0218 14:23:06.891019 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-8c9d795d5-hcnvm" Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.043301 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-config-data\") pod \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.043429 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-config-data-custom\") pod \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.043572 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-combined-ca-bundle\") pod \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.043641 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lczr4\" (UniqueName: \"kubernetes.io/projected/48f5a3e4-7bee-4689-b7b8-5869536bebb6-kube-api-access-lczr4\") pod \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\" (UID: \"48f5a3e4-7bee-4689-b7b8-5869536bebb6\") " Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.057890 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "48f5a3e4-7bee-4689-b7b8-5869536bebb6" (UID: "48f5a3e4-7bee-4689-b7b8-5869536bebb6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.057941 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48f5a3e4-7bee-4689-b7b8-5869536bebb6-kube-api-access-lczr4" (OuterVolumeSpecName: "kube-api-access-lczr4") pod "48f5a3e4-7bee-4689-b7b8-5869536bebb6" (UID: "48f5a3e4-7bee-4689-b7b8-5869536bebb6"). InnerVolumeSpecName "kube-api-access-lczr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.090491 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48f5a3e4-7bee-4689-b7b8-5869536bebb6" (UID: "48f5a3e4-7bee-4689-b7b8-5869536bebb6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.125714 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-config-data" (OuterVolumeSpecName: "config-data") pod "48f5a3e4-7bee-4689-b7b8-5869536bebb6" (UID: "48f5a3e4-7bee-4689-b7b8-5869536bebb6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.148201 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.148272 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.148288 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lczr4\" (UniqueName: \"kubernetes.io/projected/48f5a3e4-7bee-4689-b7b8-5869536bebb6-kube-api-access-lczr4\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.148306 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48f5a3e4-7bee-4689-b7b8-5869536bebb6-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.251548 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8c9d795d5-hcnvm" event={"ID":"48f5a3e4-7bee-4689-b7b8-5869536bebb6","Type":"ContainerDied","Data":"93bc0594ac2cecd77e6e563c92943e92190081b3c713021d84dd28fc365b4b5c"} Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.251811 4739 scope.go:117] "RemoveContainer" containerID="82135dc6825fa5f144d383addc8105986ce22d1f6d4310421f2ea3bc7b02b990" Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.251675 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-8c9d795d5-hcnvm" Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.309575 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-8c9d795d5-hcnvm"] Feb 18 14:23:07 crc kubenswrapper[4739]: I0218 14:23:07.326973 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-8c9d795d5-hcnvm"] Feb 18 14:23:08 crc kubenswrapper[4739]: I0218 14:23:08.264571 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xfg9d" event={"ID":"2ed7afcd-a9be-4c59-836d-355e4c502a01","Type":"ContainerStarted","Data":"7decdedc36c29035cbd6c5768e12052f73ae02bcfb7ff083bd55e7ded7c3ba91"} Feb 18 14:23:08 crc kubenswrapper[4739]: I0218 14:23:08.283890 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-xfg9d" podStartSLOduration=2.144997534 podStartE2EDuration="13.283868238s" podCreationTimestamp="2026-02-18 14:22:55 +0000 UTC" firstStartedPulling="2026-02-18 14:22:56.389340967 +0000 UTC m=+1408.885061889" lastFinishedPulling="2026-02-18 14:23:07.528211671 +0000 UTC m=+1420.023932593" observedRunningTime="2026-02-18 14:23:08.278565815 +0000 UTC m=+1420.774286737" watchObservedRunningTime="2026-02-18 14:23:08.283868238 +0000 UTC m=+1420.779589160" Feb 18 14:23:08 crc kubenswrapper[4739]: I0218 14:23:08.425439 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48f5a3e4-7bee-4689-b7b8-5869536bebb6" path="/var/lib/kubelet/pods/48f5a3e4-7bee-4689-b7b8-5869536bebb6/volumes" Feb 18 14:23:09 crc kubenswrapper[4739]: I0218 14:23:09.881801 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:23:09 crc kubenswrapper[4739]: 
I0218 14:23:09.882592 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3677acc3-fd05-4d33-ac6c-aa420ecce125" containerName="glance-log" containerID="cri-o://7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768" gracePeriod=30 Feb 18 14:23:09 crc kubenswrapper[4739]: I0218 14:23:09.882722 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3677acc3-fd05-4d33-ac6c-aa420ecce125" containerName="glance-httpd" containerID="cri-o://55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59" gracePeriod=30 Feb 18 14:23:10 crc kubenswrapper[4739]: I0218 14:23:10.294978 4739 generic.go:334] "Generic (PLEG): container finished" podID="3677acc3-fd05-4d33-ac6c-aa420ecce125" containerID="7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768" exitCode=143 Feb 18 14:23:10 crc kubenswrapper[4739]: I0218 14:23:10.295040 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3677acc3-fd05-4d33-ac6c-aa420ecce125","Type":"ContainerDied","Data":"7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768"} Feb 18 14:23:13 crc kubenswrapper[4739]: I0218 14:23:13.017228 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:23:13 crc kubenswrapper[4739]: I0218 14:23:13.018163 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="ceilometer-central-agent" containerID="cri-o://3e20d5bc67da999c67b2b030638e14f2a7846dbe20d76ce5dce6686024c72645" gracePeriod=30 Feb 18 14:23:13 crc kubenswrapper[4739]: I0218 14:23:13.018300 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="ceilometer-notification-agent" containerID="cri-o://0bb35ababf8f49716c465fd1a071a3fc61371f1c41007f69d57d1ece07a81b5b" gracePeriod=30 Feb 18 14:23:13 crc kubenswrapper[4739]: I0218 14:23:13.018271 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="proxy-httpd" containerID="cri-o://4436b566cc1f05e9fd1f4a6b477aee31ea85c52d7a160c7100ca69ed4da051cd" gracePeriod=30 Feb 18 14:23:13 crc kubenswrapper[4739]: I0218 14:23:13.018281 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="sg-core" containerID="cri-o://68e9714ba536a43d37501d6b7f010d3c6c39bb5acb025c1ebc16c210fbdc0c5c" gracePeriod=30 Feb 18 14:23:13 crc kubenswrapper[4739]: I0218 14:23:13.026768 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.230:3000/\": EOF" Feb 18 14:23:13 crc kubenswrapper[4739]: I0218 14:23:13.330789 4739 generic.go:334] "Generic (PLEG): container finished" podID="043c7e92-488e-4581-b683-a50c6f3e4262" containerID="4436b566cc1f05e9fd1f4a6b477aee31ea85c52d7a160c7100ca69ed4da051cd" exitCode=0 Feb 18 14:23:13 crc kubenswrapper[4739]: I0218 14:23:13.330823 4739 generic.go:334] "Generic (PLEG): container finished" podID="043c7e92-488e-4581-b683-a50c6f3e4262" 
containerID="68e9714ba536a43d37501d6b7f010d3c6c39bb5acb025c1ebc16c210fbdc0c5c" exitCode=2 Feb 18 14:23:13 crc kubenswrapper[4739]: I0218 14:23:13.330847 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043c7e92-488e-4581-b683-a50c6f3e4262","Type":"ContainerDied","Data":"4436b566cc1f05e9fd1f4a6b477aee31ea85c52d7a160c7100ca69ed4da051cd"} Feb 18 14:23:13 crc kubenswrapper[4739]: I0218 14:23:13.330878 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043c7e92-488e-4581-b683-a50c6f3e4262","Type":"ContainerDied","Data":"68e9714ba536a43d37501d6b7f010d3c6c39bb5acb025c1ebc16c210fbdc0c5c"} Feb 18 14:23:13 crc kubenswrapper[4739]: I0218 14:23:13.940727 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.106516 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"3677acc3-fd05-4d33-ac6c-aa420ecce125\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.106672 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-scripts\") pod \"3677acc3-fd05-4d33-ac6c-aa420ecce125\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.106790 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-config-data\") pod \"3677acc3-fd05-4d33-ac6c-aa420ecce125\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.106853 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-internal-tls-certs\") pod \"3677acc3-fd05-4d33-ac6c-aa420ecce125\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.106964 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-combined-ca-bundle\") pod \"3677acc3-fd05-4d33-ac6c-aa420ecce125\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.107009 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92f7z\" (UniqueName: \"kubernetes.io/projected/3677acc3-fd05-4d33-ac6c-aa420ecce125-kube-api-access-92f7z\") pod \"3677acc3-fd05-4d33-ac6c-aa420ecce125\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.107027 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3677acc3-fd05-4d33-ac6c-aa420ecce125-httpd-run\") pod \"3677acc3-fd05-4d33-ac6c-aa420ecce125\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.107079 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/3677acc3-fd05-4d33-ac6c-aa420ecce125-logs\") pod \"3677acc3-fd05-4d33-ac6c-aa420ecce125\" (UID: \"3677acc3-fd05-4d33-ac6c-aa420ecce125\") " Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.108643 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3677acc3-fd05-4d33-ac6c-aa420ecce125-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3677acc3-fd05-4d33-ac6c-aa420ecce125" (UID: "3677acc3-fd05-4d33-ac6c-aa420ecce125"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.108791 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3677acc3-fd05-4d33-ac6c-aa420ecce125-logs" (OuterVolumeSpecName: "logs") pod "3677acc3-fd05-4d33-ac6c-aa420ecce125" (UID: "3677acc3-fd05-4d33-ac6c-aa420ecce125"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.114296 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-scripts" (OuterVolumeSpecName: "scripts") pod "3677acc3-fd05-4d33-ac6c-aa420ecce125" (UID: "3677acc3-fd05-4d33-ac6c-aa420ecce125"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.119958 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3677acc3-fd05-4d33-ac6c-aa420ecce125-kube-api-access-92f7z" (OuterVolumeSpecName: "kube-api-access-92f7z") pod "3677acc3-fd05-4d33-ac6c-aa420ecce125" (UID: "3677acc3-fd05-4d33-ac6c-aa420ecce125"). InnerVolumeSpecName "kube-api-access-92f7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.173092 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15" (OuterVolumeSpecName: "glance") pod "3677acc3-fd05-4d33-ac6c-aa420ecce125" (UID: "3677acc3-fd05-4d33-ac6c-aa420ecce125"). InnerVolumeSpecName "pvc-15694efd-23b4-48d1-830b-42bbc6c51b15". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.177974 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3677acc3-fd05-4d33-ac6c-aa420ecce125" (UID: "3677acc3-fd05-4d33-ac6c-aa420ecce125"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.209678 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.209707 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.209720 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92f7z\" (UniqueName: \"kubernetes.io/projected/3677acc3-fd05-4d33-ac6c-aa420ecce125-kube-api-access-92f7z\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.209729 4739 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3677acc3-fd05-4d33-ac6c-aa420ecce125-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.209739 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3677acc3-fd05-4d33-ac6c-aa420ecce125-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.209766 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") on node \"crc\" " Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.211658 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3677acc3-fd05-4d33-ac6c-aa420ecce125" (UID: "3677acc3-fd05-4d33-ac6c-aa420ecce125"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.236797 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-config-data" (OuterVolumeSpecName: "config-data") pod "3677acc3-fd05-4d33-ac6c-aa420ecce125" (UID: "3677acc3-fd05-4d33-ac6c-aa420ecce125"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.265489 4739 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.265681 4739 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-15694efd-23b4-48d1-830b-42bbc6c51b15" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15") on node "crc" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.312109 4739 reconciler_common.go:293] "Volume detached for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.312144 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.312155 4739 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3677acc3-fd05-4d33-ac6c-aa420ecce125-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.345232 4739 generic.go:334] "Generic (PLEG): container finished" podID="3677acc3-fd05-4d33-ac6c-aa420ecce125" containerID="55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59" exitCode=0 Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.345305 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.345328 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3677acc3-fd05-4d33-ac6c-aa420ecce125","Type":"ContainerDied","Data":"55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59"} Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.345365 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3677acc3-fd05-4d33-ac6c-aa420ecce125","Type":"ContainerDied","Data":"3a259073ef5437a741c7e7a8473f57ccd05a34b5954be95c2003c50962d48fb6"} Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.345386 4739 scope.go:117] "RemoveContainer" containerID="55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.349273 4739 generic.go:334] "Generic (PLEG): container finished" podID="043c7e92-488e-4581-b683-a50c6f3e4262" containerID="3e20d5bc67da999c67b2b030638e14f2a7846dbe20d76ce5dce6686024c72645" exitCode=0 Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.349393 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043c7e92-488e-4581-b683-a50c6f3e4262","Type":"ContainerDied","Data":"3e20d5bc67da999c67b2b030638e14f2a7846dbe20d76ce5dce6686024c72645"} Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.387495 4739 scope.go:117] "RemoveContainer" containerID="7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.392667 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.406967 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.433563 4739 scope.go:117] "RemoveContainer" 
containerID="55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.434870 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3677acc3-fd05-4d33-ac6c-aa420ecce125" path="/var/lib/kubelet/pods/3677acc3-fd05-4d33-ac6c-aa420ecce125/volumes" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.435917 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:23:14 crc kubenswrapper[4739]: E0218 14:23:14.436515 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3677acc3-fd05-4d33-ac6c-aa420ecce125" containerName="glance-httpd" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.436608 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3677acc3-fd05-4d33-ac6c-aa420ecce125" containerName="glance-httpd" Feb 18 14:23:14 crc kubenswrapper[4739]: E0218 14:23:14.436698 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3677acc3-fd05-4d33-ac6c-aa420ecce125" containerName="glance-log" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.436753 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3677acc3-fd05-4d33-ac6c-aa420ecce125" containerName="glance-log" Feb 18 14:23:14 crc kubenswrapper[4739]: E0218 14:23:14.436819 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48f5a3e4-7bee-4689-b7b8-5869536bebb6" containerName="heat-engine" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.436865 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="48f5a3e4-7bee-4689-b7b8-5869536bebb6" containerName="heat-engine" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.437146 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="48f5a3e4-7bee-4689-b7b8-5869536bebb6" containerName="heat-engine" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.437227 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3677acc3-fd05-4d33-ac6c-aa420ecce125" containerName="glance-log" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.437290 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3677acc3-fd05-4d33-ac6c-aa420ecce125" containerName="glance-httpd" Feb 18 14:23:14 crc kubenswrapper[4739]: E0218 14:23:14.438988 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59\": container with ID starting with 55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59 not found: ID does not exist" containerID="55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.439043 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59"} err="failed to get container status \"55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59\": rpc error: code = NotFound desc = could not find container \"55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59\": container with ID starting with 55d7fa09ae1a32ca9f34dfa2b3d84d9b02e24f72c62bc041fa875a620d2e0b59 not found: ID does not exist" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.439081 4739 scope.go:117] "RemoveContainer" containerID="7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.440826 
4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: E0218 14:23:14.442164 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768\": container with ID starting with 7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768 not found: ID does not exist" containerID="7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.442310 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768"} err="failed to get container status \"7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768\": rpc error: code = NotFound desc = could not find container \"7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768\": container with ID starting with 7628b5173857fee787a0e47df61d568f61946e02c484b8144866ca881703b768 not found: ID does not exist" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.443299 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.444544 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.445869 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.620564 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f517de-033c-467c-9937-df5706ee1ca2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.620706 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43f517de-033c-467c-9937-df5706ee1ca2-logs\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.620825 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkgxg\" (UniqueName: \"kubernetes.io/projected/43f517de-033c-467c-9937-df5706ee1ca2-kube-api-access-jkgxg\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.620943 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43f517de-033c-467c-9937-df5706ee1ca2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.621075 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.621211 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f517de-033c-467c-9937-df5706ee1ca2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.621286 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43f517de-033c-467c-9937-df5706ee1ca2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.621325 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43f517de-033c-467c-9937-df5706ee1ca2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.723624 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f517de-033c-467c-9937-df5706ee1ca2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.724343 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43f517de-033c-467c-9937-df5706ee1ca2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.725059 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43f517de-033c-467c-9937-df5706ee1ca2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.725368 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f517de-033c-467c-9937-df5706ee1ca2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.725629 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43f517de-033c-467c-9937-df5706ee1ca2-logs\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.725878 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkgxg\" (UniqueName: 
\"kubernetes.io/projected/43f517de-033c-467c-9937-df5706ee1ca2-kube-api-access-jkgxg\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.726012 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43f517de-033c-467c-9937-df5706ee1ca2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.726160 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43f517de-033c-467c-9937-df5706ee1ca2-logs\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.726544 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43f517de-033c-467c-9937-df5706ee1ca2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.726552 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.731732 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f517de-033c-467c-9937-df5706ee1ca2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.737545 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43f517de-033c-467c-9937-df5706ee1ca2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.742420 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43f517de-033c-467c-9937-df5706ee1ca2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.743609 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43f517de-033c-467c-9937-df5706ee1ca2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.769469 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkgxg\" (UniqueName: \"kubernetes.io/projected/43f517de-033c-467c-9937-df5706ee1ca2-kube-api-access-jkgxg\") pod 
\"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.880668 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.880719 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0bd6abac90ebac69ac03837941e4aa1820f14a49ea1b1fe31e1dd216b0487447/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 18 14:23:14 crc kubenswrapper[4739]: I0218 14:23:14.936870 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-15694efd-23b4-48d1-830b-42bbc6c51b15\") pod \"glance-default-internal-api-0\" (UID: \"43f517de-033c-467c-9937-df5706ee1ca2\") " pod="openstack/glance-default-internal-api-0" Feb 18 14:23:15 crc kubenswrapper[4739]: I0218 14:23:15.062684 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 14:23:15 crc kubenswrapper[4739]: I0218 14:23:15.859135 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 14:23:16 crc kubenswrapper[4739]: I0218 14:23:16.395468 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"43f517de-033c-467c-9937-df5706ee1ca2","Type":"ContainerStarted","Data":"d4df0f8d4267d4f85121acc8729cb17a2c6b7020a08109840e7ed6bb94cca088"} Feb 18 14:23:16 crc kubenswrapper[4739]: I0218 14:23:16.916394 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:23:16 crc kubenswrapper[4739]: I0218 14:23:16.918516 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" containerName="glance-log" containerID="cri-o://2ac1313ffdbad15c09d0bb7f2a4d1b596f72ac62a6780cb62e70fa5559b8c999" gracePeriod=30 Feb 18 14:23:16 crc kubenswrapper[4739]: I0218 14:23:16.918776 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" containerName="glance-httpd" containerID="cri-o://c780b2636e91712d69d355da22c8be023ac8a48eb8e209ca36fa75cd60964d96" gracePeriod=30 Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.414245 4739 generic.go:334] "Generic (PLEG): container finished" podID="043c7e92-488e-4581-b683-a50c6f3e4262" containerID="0bb35ababf8f49716c465fd1a071a3fc61371f1c41007f69d57d1ece07a81b5b" exitCode=0 Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.414927 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043c7e92-488e-4581-b683-a50c6f3e4262","Type":"ContainerDied","Data":"0bb35ababf8f49716c465fd1a071a3fc61371f1c41007f69d57d1ece07a81b5b"} Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.415024 4739 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"043c7e92-488e-4581-b683-a50c6f3e4262","Type":"ContainerDied","Data":"363130708c866244a3e77f2ddb6aef5f2bdac939b2f9d1e05276302b2523678f"} Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.415083 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="363130708c866244a3e77f2ddb6aef5f2bdac939b2f9d1e05276302b2523678f" Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.419091 4739 generic.go:334] "Generic (PLEG): container finished" podID="f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" containerID="2ac1313ffdbad15c09d0bb7f2a4d1b596f72ac62a6780cb62e70fa5559b8c999" exitCode=143 Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.419147 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad","Type":"ContainerDied","Data":"2ac1313ffdbad15c09d0bb7f2a4d1b596f72ac62a6780cb62e70fa5559b8c999"} Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.456155 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.601816 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043c7e92-488e-4581-b683-a50c6f3e4262-run-httpd\") pod \"043c7e92-488e-4581-b683-a50c6f3e4262\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.601892 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcscx\" (UniqueName: \"kubernetes.io/projected/043c7e92-488e-4581-b683-a50c6f3e4262-kube-api-access-zcscx\") pod \"043c7e92-488e-4581-b683-a50c6f3e4262\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.601929 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-scripts\") pod \"043c7e92-488e-4581-b683-a50c6f3e4262\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.601985 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043c7e92-488e-4581-b683-a50c6f3e4262-log-httpd\") pod \"043c7e92-488e-4581-b683-a50c6f3e4262\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.602188 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-config-data\") pod \"043c7e92-488e-4581-b683-a50c6f3e4262\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.602248 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-sg-core-conf-yaml\") pod \"043c7e92-488e-4581-b683-a50c6f3e4262\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.602294 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-combined-ca-bundle\") pod 
\"043c7e92-488e-4581-b683-a50c6f3e4262\" (UID: \"043c7e92-488e-4581-b683-a50c6f3e4262\") " Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.602364 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/043c7e92-488e-4581-b683-a50c6f3e4262-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "043c7e92-488e-4581-b683-a50c6f3e4262" (UID: "043c7e92-488e-4581-b683-a50c6f3e4262"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.602714 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/043c7e92-488e-4581-b683-a50c6f3e4262-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "043c7e92-488e-4581-b683-a50c6f3e4262" (UID: "043c7e92-488e-4581-b683-a50c6f3e4262"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.603417 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043c7e92-488e-4581-b683-a50c6f3e4262-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.603450 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/043c7e92-488e-4581-b683-a50c6f3e4262-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.610922 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/043c7e92-488e-4581-b683-a50c6f3e4262-kube-api-access-zcscx" (OuterVolumeSpecName: "kube-api-access-zcscx") pod "043c7e92-488e-4581-b683-a50c6f3e4262" (UID: "043c7e92-488e-4581-b683-a50c6f3e4262"). InnerVolumeSpecName "kube-api-access-zcscx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.614970 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-scripts" (OuterVolumeSpecName: "scripts") pod "043c7e92-488e-4581-b683-a50c6f3e4262" (UID: "043c7e92-488e-4581-b683-a50c6f3e4262"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.651621 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "043c7e92-488e-4581-b683-a50c6f3e4262" (UID: "043c7e92-488e-4581-b683-a50c6f3e4262"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.705450 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcscx\" (UniqueName: \"kubernetes.io/projected/043c7e92-488e-4581-b683-a50c6f3e4262-kube-api-access-zcscx\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.705491 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.705504 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.763433 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "043c7e92-488e-4581-b683-a50c6f3e4262" (UID: "043c7e92-488e-4581-b683-a50c6f3e4262"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.808225 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.819966 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-config-data" (OuterVolumeSpecName: "config-data") pod "043c7e92-488e-4581-b683-a50c6f3e4262" (UID: "043c7e92-488e-4581-b683-a50c6f3e4262"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:17 crc kubenswrapper[4739]: I0218 14:23:17.911269 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/043c7e92-488e-4581-b683-a50c6f3e4262-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.473566 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.473675 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"43f517de-033c-467c-9937-df5706ee1ca2","Type":"ContainerStarted","Data":"0f0231204125bc2b20fc4b72eed91b2aaa5b163d221cc452723692c9b26fe987"} Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.531071 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.543497 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.572432 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:23:18 crc kubenswrapper[4739]: E0218 14:23:18.573078 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="proxy-httpd" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.573093 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="proxy-httpd" Feb 18 14:23:18 crc kubenswrapper[4739]: E0218 14:23:18.573144 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="sg-core" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.573151 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="sg-core" Feb 18 14:23:18 crc kubenswrapper[4739]: E0218 14:23:18.573161 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="ceilometer-notification-agent" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.573168 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="ceilometer-notification-agent" Feb 18 14:23:18 crc kubenswrapper[4739]: E0218 14:23:18.573184 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="ceilometer-central-agent" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.573189 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="ceilometer-central-agent" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.573479 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="ceilometer-notification-agent" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.573513 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="ceilometer-central-agent" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.573527 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="proxy-httpd" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.573540 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" containerName="sg-core" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.576372 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.579260 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.579510 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.586692 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.760117 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-log-httpd\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.760330 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-run-httpd\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.760433 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-scripts\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.760509 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8hkp\" (UniqueName: \"kubernetes.io/projected/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-kube-api-access-s8hkp\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.760715 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.760957 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-config-data\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.761063 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.862909 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-run-httpd\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.862970 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-scripts\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.862994 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8hkp\" (UniqueName: \"kubernetes.io/projected/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-kube-api-access-s8hkp\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.863028 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.863083 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-config-data\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.863109 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.863177 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-log-httpd\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.863432 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-run-httpd\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.864211 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-log-httpd\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.870622 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-config-data\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.871564 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-scripts\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.872839 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.876213 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.887412 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8hkp\" (UniqueName: \"kubernetes.io/projected/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-kube-api-access-s8hkp\") pod \"ceilometer-0\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " pod="openstack/ceilometer-0" Feb 18 14:23:18 crc kubenswrapper[4739]: I0218 14:23:18.921237 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:23:19 crc kubenswrapper[4739]: I0218 14:23:19.485569 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:23:19 crc kubenswrapper[4739]: I0218 14:23:19.491917 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"43f517de-033c-467c-9937-df5706ee1ca2","Type":"ContainerStarted","Data":"e308311384c510554abf8ba314d9b8cc54b782be943c166fcd6bb31d51a1056b"} Feb 18 14:23:19 crc kubenswrapper[4739]: I0218 14:23:19.520221 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.520174906 podStartE2EDuration="5.520174906s" podCreationTimestamp="2026-02-18 14:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:23:19.515026497 +0000 UTC m=+1432.010747419" watchObservedRunningTime="2026-02-18 14:23:19.520174906 +0000 UTC m=+1432.015895838" Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.433729 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="043c7e92-488e-4581-b683-a50c6f3e4262" path="/var/lib/kubelet/pods/043c7e92-488e-4581-b683-a50c6f3e4262/volumes" Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.510987 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40","Type":"ContainerStarted","Data":"15625072c38b1bf8ecb9484d34cda1baf8e1ed5006b99a1e19bebfe35acb6921"} Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.515186 4739 generic.go:334] "Generic (PLEG): container finished" podID="f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" containerID="c780b2636e91712d69d355da22c8be023ac8a48eb8e209ca36fa75cd60964d96" exitCode=0 Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.515617 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad","Type":"ContainerDied","Data":"c780b2636e91712d69d355da22c8be023ac8a48eb8e209ca36fa75cd60964d96"} Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.844335 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.934968 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.935437 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-public-tls-certs\") pod \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.935493 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-logs\") pod \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.937692 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fd27\" (UniqueName: \"kubernetes.io/projected/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-kube-api-access-6fd27\") pod \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.937743 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-scripts\") pod \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.937856 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-httpd-run\") pod \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.938117 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" (UID: "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.939254 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-logs" (OuterVolumeSpecName: "logs") pod "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" (UID: "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.942869 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-config-data\") pod \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.943041 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-combined-ca-bundle\") pod \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\" (UID: \"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad\") " Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.944116 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.944134 4739 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.947197 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-scripts" (OuterVolumeSpecName: "scripts") pod "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" (UID: "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.958677 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-kube-api-access-6fd27" (OuterVolumeSpecName: "kube-api-access-6fd27") pod "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" (UID: "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad"). InnerVolumeSpecName "kube-api-access-6fd27". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:23:20 crc kubenswrapper[4739]: I0218 14:23:20.971572 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b" (OuterVolumeSpecName: "glance") pod "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" (UID: "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad"). InnerVolumeSpecName "pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.029743 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" (UID: "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.047514 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.047581 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") on node \"crc\" " Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.047596 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fd27\" (UniqueName: \"kubernetes.io/projected/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-kube-api-access-6fd27\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.047607 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.063847 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-config-data" (OuterVolumeSpecName: "config-data") pod "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" (UID: "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.097556 4739 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.097730 4739 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b") on node "crc" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.111570 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" (UID: "f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.149830 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.149884 4739 reconciler_common.go:293] "Volume detached for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.149900 4739 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.529304 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad","Type":"ContainerDied","Data":"a83503aad1227f8256e1acb3ea10be6b3f0c314a395eb1f234c642acb0b7ab14"} Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.529352 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.529912 4739 scope.go:117] "RemoveContainer" containerID="c780b2636e91712d69d355da22c8be023ac8a48eb8e209ca36fa75cd60964d96" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.531970 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40","Type":"ContainerStarted","Data":"7cb84333b58be15a2210f89adee22417614eb80e8146f3f7e40e5b59e3acec24"} Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.578057 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.591033 4739 scope.go:117] "RemoveContainer" containerID="2ac1313ffdbad15c09d0bb7f2a4d1b596f72ac62a6780cb62e70fa5559b8c999" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.604114 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.627965 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:23:21 crc kubenswrapper[4739]: E0218 14:23:21.628627 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" containerName="glance-httpd" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.628643 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" containerName="glance-httpd" Feb 18 14:23:21 crc kubenswrapper[4739]: E0218 14:23:21.628683 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" containerName="glance-log" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.628692 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" containerName="glance-log" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.628983 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" containerName="glance-log" Feb 18 14:23:21 crc 
kubenswrapper[4739]: I0218 14:23:21.629001 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" containerName="glance-httpd" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.630566 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.644244 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.650005 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.650792 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.780434 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac763f9f-5faa-4559-8d07-960b3d30566b-config-data\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.780533 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmckw\" (UniqueName: \"kubernetes.io/projected/ac763f9f-5faa-4559-8d07-960b3d30566b-kube-api-access-wmckw\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.780722 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac763f9f-5faa-4559-8d07-960b3d30566b-scripts\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.780779 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac763f9f-5faa-4559-8d07-960b3d30566b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.780946 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.781003 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac763f9f-5faa-4559-8d07-960b3d30566b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.781235 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac763f9f-5faa-4559-8d07-960b3d30566b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.781357 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac763f9f-5faa-4559-8d07-960b3d30566b-logs\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.883215 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac763f9f-5faa-4559-8d07-960b3d30566b-config-data\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.883276 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmckw\" (UniqueName: \"kubernetes.io/projected/ac763f9f-5faa-4559-8d07-960b3d30566b-kube-api-access-wmckw\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.883324 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac763f9f-5faa-4559-8d07-960b3d30566b-scripts\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.883343 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac763f9f-5faa-4559-8d07-960b3d30566b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.883399 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.883426 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac763f9f-5faa-4559-8d07-960b3d30566b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.883517 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac763f9f-5faa-4559-8d07-960b3d30566b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.883559 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/ac763f9f-5faa-4559-8d07-960b3d30566b-logs\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.884212 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac763f9f-5faa-4559-8d07-960b3d30566b-logs\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.890097 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac763f9f-5faa-4559-8d07-960b3d30566b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.891554 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac763f9f-5faa-4559-8d07-960b3d30566b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.903202 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac763f9f-5faa-4559-8d07-960b3d30566b-scripts\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.903811 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac763f9f-5faa-4559-8d07-960b3d30566b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.904369 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.904436 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f742b1b3d6273dd3375e0e5a76a4c01f047ef0c4f7f8765a09ef674c2c3b6349/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.906832 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac763f9f-5faa-4559-8d07-960b3d30566b-config-data\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0" Feb 18 14:23:21 crc kubenswrapper[4739]: I0218 14:23:21.983401 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmckw\" (UniqueName: \"kubernetes.io/projected/ac763f9f-5faa-4559-8d07-960b3d30566b-kube-api-access-wmckw\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0" Feb 18 14:23:22 crc kubenswrapper[4739]: I0218 14:23:22.008852 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c49996c7-a6b2-4100-a1cb-c41fc0bda59b\") pod \"glance-default-external-api-0\" (UID: \"ac763f9f-5faa-4559-8d07-960b3d30566b\") " pod="openstack/glance-default-external-api-0" Feb 18 14:23:22 crc kubenswrapper[4739]: I0218 14:23:22.263119 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 14:23:22 crc kubenswrapper[4739]: I0218 14:23:22.430990 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad" path="/var/lib/kubelet/pods/f5b6ca41-d34e-4ef9-b04c-4de7a50b71ad/volumes" Feb 18 14:23:22 crc kubenswrapper[4739]: I0218 14:23:22.547334 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40","Type":"ContainerStarted","Data":"da0d2a4461fd0e93fa0d2f0206e6d723fcdf2469cbd26e7227c5dafa1b1a7b91"} Feb 18 14:23:23 crc kubenswrapper[4739]: I0218 14:23:23.282988 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 14:23:23 crc kubenswrapper[4739]: I0218 14:23:23.578385 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40","Type":"ContainerStarted","Data":"e831818a0e3deb50ef385bca26013a078b300adeb8cd0fcfdd387866f339b245"} Feb 18 14:23:23 crc kubenswrapper[4739]: I0218 14:23:23.585911 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ac763f9f-5faa-4559-8d07-960b3d30566b","Type":"ContainerStarted","Data":"c74a46caf197dedd9c2d715d0c17a4b9a9979871dc62b59f2ca4c7377645f255"} Feb 18 14:23:24 crc kubenswrapper[4739]: I0218 14:23:24.600052 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ac763f9f-5faa-4559-8d07-960b3d30566b","Type":"ContainerStarted","Data":"dc445577715fa089d238a4ffcca852ad90b61274bed53ae3ff7724fb515dd60a"} Feb 18 14:23:24 crc kubenswrapper[4739]: I0218 14:23:24.600107 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ac763f9f-5faa-4559-8d07-960b3d30566b","Type":"ContainerStarted","Data":"68090b2b8e5037afa342f3250c6bbea61b080100f9dc5b387dc68386a59fc3a1"} Feb 18 14:23:24 crc kubenswrapper[4739]: I0218 14:23:24.642770 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.642744086 podStartE2EDuration="3.642744086s" podCreationTimestamp="2026-02-18 14:23:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:23:24.637176717 +0000 UTC m=+1437.132897649" watchObservedRunningTime="2026-02-18 14:23:24.642744086 +0000 UTC m=+1437.138465018" Feb 18 14:23:25 crc kubenswrapper[4739]: I0218 14:23:25.063719 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 14:23:25 crc kubenswrapper[4739]: I0218 14:23:25.063782 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 14:23:25 crc kubenswrapper[4739]: I0218 14:23:25.102907 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 14:23:25 crc kubenswrapper[4739]: I0218 14:23:25.115285 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 14:23:25 crc kubenswrapper[4739]: I0218 14:23:25.621904 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 14:23:25 crc 
kubenswrapper[4739]: I0218 14:23:25.627779 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 14:23:26 crc kubenswrapper[4739]: I0218 14:23:26.288486 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:23:26 crc kubenswrapper[4739]: I0218 14:23:26.635990 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40","Type":"ContainerStarted","Data":"89235a3b1e9de2c433f31e281b9be507904e71d0aa11e8538f1814a923368ab2"} Feb 18 14:23:26 crc kubenswrapper[4739]: I0218 14:23:26.636285 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 14:23:26 crc kubenswrapper[4739]: I0218 14:23:26.672265 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.836954701 podStartE2EDuration="8.672248838s" podCreationTimestamp="2026-02-18 14:23:18 +0000 UTC" firstStartedPulling="2026-02-18 14:23:19.48289257 +0000 UTC m=+1431.978613502" lastFinishedPulling="2026-02-18 14:23:25.318186717 +0000 UTC m=+1437.813907639" observedRunningTime="2026-02-18 14:23:26.667759876 +0000 UTC m=+1439.163480808" watchObservedRunningTime="2026-02-18 14:23:26.672248838 +0000 UTC m=+1439.167969760" Feb 18 14:23:27 crc kubenswrapper[4739]: I0218 14:23:27.647189 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 14:23:27 crc kubenswrapper[4739]: I0218 14:23:27.647231 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 14:23:27 crc kubenswrapper[4739]: I0218 14:23:27.647345 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="ceilometer-central-agent" containerID="cri-o://7cb84333b58be15a2210f89adee22417614eb80e8146f3f7e40e5b59e3acec24" gracePeriod=30 Feb 18 14:23:27 crc kubenswrapper[4739]: I0218 14:23:27.647420 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="proxy-httpd" containerID="cri-o://89235a3b1e9de2c433f31e281b9be507904e71d0aa11e8538f1814a923368ab2" gracePeriod=30 Feb 18 14:23:27 crc kubenswrapper[4739]: I0218 14:23:27.647503 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="sg-core" containerID="cri-o://e831818a0e3deb50ef385bca26013a078b300adeb8cd0fcfdd387866f339b245" gracePeriod=30 Feb 18 14:23:27 crc kubenswrapper[4739]: I0218 14:23:27.647551 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="ceilometer-notification-agent" containerID="cri-o://da0d2a4461fd0e93fa0d2f0206e6d723fcdf2469cbd26e7227c5dafa1b1a7b91" gracePeriod=30 Feb 18 14:23:28 crc kubenswrapper[4739]: I0218 14:23:28.070027 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 14:23:28 crc kubenswrapper[4739]: I0218 14:23:28.082726 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 14:23:28 crc kubenswrapper[4739]: I0218 14:23:28.660212 4739 generic.go:334] "Generic (PLEG): container finished" 
podID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerID="89235a3b1e9de2c433f31e281b9be507904e71d0aa11e8538f1814a923368ab2" exitCode=0 Feb 18 14:23:28 crc kubenswrapper[4739]: I0218 14:23:28.660510 4739 generic.go:334] "Generic (PLEG): container finished" podID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerID="e831818a0e3deb50ef385bca26013a078b300adeb8cd0fcfdd387866f339b245" exitCode=2 Feb 18 14:23:28 crc kubenswrapper[4739]: I0218 14:23:28.660521 4739 generic.go:334] "Generic (PLEG): container finished" podID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerID="da0d2a4461fd0e93fa0d2f0206e6d723fcdf2469cbd26e7227c5dafa1b1a7b91" exitCode=0 Feb 18 14:23:28 crc kubenswrapper[4739]: I0218 14:23:28.661550 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40","Type":"ContainerDied","Data":"89235a3b1e9de2c433f31e281b9be507904e71d0aa11e8538f1814a923368ab2"} Feb 18 14:23:28 crc kubenswrapper[4739]: I0218 14:23:28.661581 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40","Type":"ContainerDied","Data":"e831818a0e3deb50ef385bca26013a078b300adeb8cd0fcfdd387866f339b245"} Feb 18 14:23:28 crc kubenswrapper[4739]: I0218 14:23:28.661593 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40","Type":"ContainerDied","Data":"da0d2a4461fd0e93fa0d2f0206e6d723fcdf2469cbd26e7227c5dafa1b1a7b91"} Feb 18 14:23:29 crc kubenswrapper[4739]: I0218 14:23:29.373583 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:23:29 crc kubenswrapper[4739]: I0218 14:23:29.373661 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:23:31 crc kubenswrapper[4739]: I0218 14:23:31.694280 4739 generic.go:334] "Generic (PLEG): container finished" podID="2ed7afcd-a9be-4c59-836d-355e4c502a01" containerID="7decdedc36c29035cbd6c5768e12052f73ae02bcfb7ff083bd55e7ded7c3ba91" exitCode=0 Feb 18 14:23:31 crc kubenswrapper[4739]: I0218 14:23:31.694667 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xfg9d" event={"ID":"2ed7afcd-a9be-4c59-836d-355e4c502a01","Type":"ContainerDied","Data":"7decdedc36c29035cbd6c5768e12052f73ae02bcfb7ff083bd55e7ded7c3ba91"} Feb 18 14:23:32 crc kubenswrapper[4739]: I0218 14:23:32.264271 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 14:23:32 crc kubenswrapper[4739]: I0218 14:23:32.264336 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 14:23:32 crc kubenswrapper[4739]: I0218 14:23:32.304747 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 14:23:32 crc kubenswrapper[4739]: I0218 14:23:32.326578 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-external-api-0" Feb 18 14:23:32 crc kubenswrapper[4739]: I0218 14:23:32.708573 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 14:23:32 crc kubenswrapper[4739]: I0218 14:23:32.708823 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.168250 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xfg9d" Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.308228 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq668\" (UniqueName: \"kubernetes.io/projected/2ed7afcd-a9be-4c59-836d-355e4c502a01-kube-api-access-fq668\") pod \"2ed7afcd-a9be-4c59-836d-355e4c502a01\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.308413 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-config-data\") pod \"2ed7afcd-a9be-4c59-836d-355e4c502a01\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.308553 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-scripts\") pod \"2ed7afcd-a9be-4c59-836d-355e4c502a01\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.308616 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-combined-ca-bundle\") pod \"2ed7afcd-a9be-4c59-836d-355e4c502a01\" (UID: \"2ed7afcd-a9be-4c59-836d-355e4c502a01\") " Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.318955 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ed7afcd-a9be-4c59-836d-355e4c502a01-kube-api-access-fq668" (OuterVolumeSpecName: "kube-api-access-fq668") pod "2ed7afcd-a9be-4c59-836d-355e4c502a01" (UID: "2ed7afcd-a9be-4c59-836d-355e4c502a01"). InnerVolumeSpecName "kube-api-access-fq668". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.320560 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-scripts" (OuterVolumeSpecName: "scripts") pod "2ed7afcd-a9be-4c59-836d-355e4c502a01" (UID: "2ed7afcd-a9be-4c59-836d-355e4c502a01"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.342695 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-config-data" (OuterVolumeSpecName: "config-data") pod "2ed7afcd-a9be-4c59-836d-355e4c502a01" (UID: "2ed7afcd-a9be-4c59-836d-355e4c502a01"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.343951 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ed7afcd-a9be-4c59-836d-355e4c502a01" (UID: "2ed7afcd-a9be-4c59-836d-355e4c502a01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.411959 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq668\" (UniqueName: \"kubernetes.io/projected/2ed7afcd-a9be-4c59-836d-355e4c502a01-kube-api-access-fq668\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.412165 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.412175 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.412185 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ed7afcd-a9be-4c59-836d-355e4c502a01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.720919 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xfg9d" Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.720910 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xfg9d" event={"ID":"2ed7afcd-a9be-4c59-836d-355e4c502a01","Type":"ContainerDied","Data":"40ea49f88d331b4c7e345388fbc286ebb3f9c3af1caee046df2e917b02eb12a7"} Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.721658 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40ea49f88d331b4c7e345388fbc286ebb3f9c3af1caee046df2e917b02eb12a7" Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.734799 4739 generic.go:334] "Generic (PLEG): container finished" podID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerID="7cb84333b58be15a2210f89adee22417614eb80e8146f3f7e40e5b59e3acec24" exitCode=0 Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.736495 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40","Type":"ContainerDied","Data":"7cb84333b58be15a2210f89adee22417614eb80e8146f3f7e40e5b59e3acec24"} Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.854365 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 14:23:33 crc kubenswrapper[4739]: E0218 14:23:33.855380 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed7afcd-a9be-4c59-836d-355e4c502a01" containerName="nova-cell0-conductor-db-sync" Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.855573 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed7afcd-a9be-4c59-836d-355e4c502a01" containerName="nova-cell0-conductor-db-sync" Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.855998 4739 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2ed7afcd-a9be-4c59-836d-355e4c502a01" containerName="nova-cell0-conductor-db-sync" Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.856939 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.861167 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.861191 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-r74ht" Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.870190 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 14:23:33 crc kubenswrapper[4739]: I0218 14:23:33.977221 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.033850 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c35bd35d-d228-4223-a207-ea164d0c6b23-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c35bd35d-d228-4223-a207-ea164d0c6b23\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.034166 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k9q9\" (UniqueName: \"kubernetes.io/projected/c35bd35d-d228-4223-a207-ea164d0c6b23-kube-api-access-4k9q9\") pod \"nova-cell0-conductor-0\" (UID: \"c35bd35d-d228-4223-a207-ea164d0c6b23\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.034227 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c35bd35d-d228-4223-a207-ea164d0c6b23-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c35bd35d-d228-4223-a207-ea164d0c6b23\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.136130 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-scripts\") pod \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.136288 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-combined-ca-bundle\") pod \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.136353 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-run-httpd\") pod \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.136432 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8hkp\" (UniqueName: \"kubernetes.io/projected/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-kube-api-access-s8hkp\") pod \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " 
Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.136529 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-config-data\") pod \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.136696 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-log-httpd\") pod \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.136777 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-sg-core-conf-yaml\") pod \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\" (UID: \"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40\") " Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.136918 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" (UID: "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.137265 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" (UID: "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.137273 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k9q9\" (UniqueName: \"kubernetes.io/projected/c35bd35d-d228-4223-a207-ea164d0c6b23-kube-api-access-4k9q9\") pod \"nova-cell0-conductor-0\" (UID: \"c35bd35d-d228-4223-a207-ea164d0c6b23\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.137428 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c35bd35d-d228-4223-a207-ea164d0c6b23-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c35bd35d-d228-4223-a207-ea164d0c6b23\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.137754 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c35bd35d-d228-4223-a207-ea164d0c6b23-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c35bd35d-d228-4223-a207-ea164d0c6b23\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.138073 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.138094 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.142207 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c35bd35d-d228-4223-a207-ea164d0c6b23-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c35bd35d-d228-4223-a207-ea164d0c6b23\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.143137 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-scripts" (OuterVolumeSpecName: "scripts") pod "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" (UID: "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.143190 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-kube-api-access-s8hkp" (OuterVolumeSpecName: "kube-api-access-s8hkp") pod "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" (UID: "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40"). InnerVolumeSpecName "kube-api-access-s8hkp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.156315 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k9q9\" (UniqueName: \"kubernetes.io/projected/c35bd35d-d228-4223-a207-ea164d0c6b23-kube-api-access-4k9q9\") pod \"nova-cell0-conductor-0\" (UID: \"c35bd35d-d228-4223-a207-ea164d0c6b23\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.167099 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c35bd35d-d228-4223-a207-ea164d0c6b23-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c35bd35d-d228-4223-a207-ea164d0c6b23\") " pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.177698 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" (UID: "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.199369 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.241276 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.241318 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.241334 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8hkp\" (UniqueName: \"kubernetes.io/projected/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-kube-api-access-s8hkp\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.262530 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" (UID: "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.306351 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-config-data" (OuterVolumeSpecName: "config-data") pod "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" (UID: "b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.356033 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.356070 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.684802 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 14:23:34 crc kubenswrapper[4739]: W0218 14:23:34.685749 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc35bd35d_d228_4223_a207_ea164d0c6b23.slice/crio-1849406a1f2c21c989cb98a31ddc847b8826ecc07a680db977c362877081e9d3 WatchSource:0}: Error finding container 1849406a1f2c21c989cb98a31ddc847b8826ecc07a680db977c362877081e9d3: Status 404 returned error can't find the container with id 1849406a1f2c21c989cb98a31ddc847b8826ecc07a680db977c362877081e9d3 Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.748978 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40","Type":"ContainerDied","Data":"15625072c38b1bf8ecb9484d34cda1baf8e1ed5006b99a1e19bebfe35acb6921"} Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.749294 4739 scope.go:117] "RemoveContainer" containerID="89235a3b1e9de2c433f31e281b9be507904e71d0aa11e8538f1814a923368ab2" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.749002 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.752635 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c35bd35d-d228-4223-a207-ea164d0c6b23","Type":"ContainerStarted","Data":"1849406a1f2c21c989cb98a31ddc847b8826ecc07a680db977c362877081e9d3"} Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.785539 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.789854 4739 scope.go:117] "RemoveContainer" containerID="e831818a0e3deb50ef385bca26013a078b300adeb8cd0fcfdd387866f339b245" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.806925 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.820801 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:23:34 crc kubenswrapper[4739]: E0218 14:23:34.821422 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="proxy-httpd" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.821462 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="proxy-httpd" Feb 18 14:23:34 crc kubenswrapper[4739]: E0218 14:23:34.821474 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="sg-core" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.821482 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="sg-core" Feb 18 14:23:34 crc kubenswrapper[4739]: E0218 14:23:34.821515 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="ceilometer-central-agent" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.821523 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="ceilometer-central-agent" Feb 18 14:23:34 crc kubenswrapper[4739]: E0218 14:23:34.821551 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="ceilometer-notification-agent" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.821559 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="ceilometer-notification-agent" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.821803 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="sg-core" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.821838 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="ceilometer-central-agent" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.821858 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="ceilometer-notification-agent" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.821885 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" containerName="proxy-httpd" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.824429 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.829784 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.830074 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.830282 4739 scope.go:117] "RemoveContainer" containerID="da0d2a4461fd0e93fa0d2f0206e6d723fcdf2469cbd26e7227c5dafa1b1a7b91" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.833998 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.886709 4739 scope.go:117] "RemoveContainer" containerID="7cb84333b58be15a2210f89adee22417614eb80e8146f3f7e40e5b59e3acec24" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.971334 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-log-httpd\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.971401 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-config-data\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.971645 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.971815 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.972137 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-scripts\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.972257 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bz55\" (UniqueName: \"kubernetes.io/projected/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-kube-api-access-7bz55\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:34 crc kubenswrapper[4739]: I0218 14:23:34.972313 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-run-httpd\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 
14:23:35.068269 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.068389 4739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.074895 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bz55\" (UniqueName: \"kubernetes.io/projected/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-kube-api-access-7bz55\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.074951 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-run-httpd\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.075021 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-log-httpd\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.075037 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-config-data\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.075101 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.075168 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.075255 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-scripts\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.076021 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-log-httpd\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.076321 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-run-httpd\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.080119 4739 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.080615 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-scripts\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.080911 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.083796 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-config-data\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.098043 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bz55\" (UniqueName: \"kubernetes.io/projected/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-kube-api-access-7bz55\") pod \"ceilometer-0\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.133079 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.159134 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.719145 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.767824 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c35bd35d-d228-4223-a207-ea164d0c6b23","Type":"ContainerStarted","Data":"9f60772161d540c8a5dbc2f3da2dba7aa06904c14b59606d6384b3d8ee20a2c1"} Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.769604 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.771027 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea8be82-c714-4993-b2c0-7af4a7fde0d3","Type":"ContainerStarted","Data":"633d577ca0d7c26b5d575a55a4d77d6216b341dedf226f7656b21d39f19c64e4"} Feb 18 14:23:35 crc kubenswrapper[4739]: I0218 14:23:35.803342 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.803315046 podStartE2EDuration="2.803315046s" podCreationTimestamp="2026-02-18 14:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:23:35.788756721 +0000 UTC m=+1448.284477663" watchObservedRunningTime="2026-02-18 14:23:35.803315046 +0000 UTC m=+1448.299035968" Feb 18 14:23:36 crc kubenswrapper[4739]: I0218 14:23:36.429302 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40" path="/var/lib/kubelet/pods/b2ac41c2-2fc5-4793-9124-7b4e2f6a2b40/volumes" Feb 18 14:23:36 crc kubenswrapper[4739]: I0218 14:23:36.782241 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea8be82-c714-4993-b2c0-7af4a7fde0d3","Type":"ContainerStarted","Data":"62637c0c6e3d9aa6dd9a357d05be808f306c43132357509831c6c4276f035294"} Feb 18 14:23:37 crc kubenswrapper[4739]: I0218 14:23:37.796544 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea8be82-c714-4993-b2c0-7af4a7fde0d3","Type":"ContainerStarted","Data":"3b915056344632cea227fb084003510db6f28165dd95f87eeb8a41b39c07b956"} Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.682720 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-zmb2f"] Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.684972 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-zmb2f" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.699176 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-55b1-account-create-update-rl2bd"] Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.724481 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-55b1-account-create-update-rl2bd" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.730016 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.757620 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-55b1-account-create-update-rl2bd"] Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.766178 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrd2m\" (UniqueName: \"kubernetes.io/projected/4445c84e-2108-44e0-a46e-673fe0858df3-kube-api-access-lrd2m\") pod \"aodh-db-create-zmb2f\" (UID: \"4445c84e-2108-44e0-a46e-673fe0858df3\") " pod="openstack/aodh-db-create-zmb2f" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.766286 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4445c84e-2108-44e0-a46e-673fe0858df3-operator-scripts\") pod \"aodh-db-create-zmb2f\" (UID: \"4445c84e-2108-44e0-a46e-673fe0858df3\") " pod="openstack/aodh-db-create-zmb2f" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.781216 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-zmb2f"] Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.814821 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea8be82-c714-4993-b2c0-7af4a7fde0d3","Type":"ContainerStarted","Data":"207b5c8f173777a219abe5fab0d30f956acecb4b1b39cab55be3107b97540271"} Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.868824 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4445c84e-2108-44e0-a46e-673fe0858df3-operator-scripts\") pod \"aodh-db-create-zmb2f\" (UID: \"4445c84e-2108-44e0-a46e-673fe0858df3\") " pod="openstack/aodh-db-create-zmb2f" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.868989 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmth8\" (UniqueName: \"kubernetes.io/projected/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa-kube-api-access-rmth8\") pod \"aodh-55b1-account-create-update-rl2bd\" (UID: \"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa\") " pod="openstack/aodh-55b1-account-create-update-rl2bd" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.869132 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa-operator-scripts\") pod \"aodh-55b1-account-create-update-rl2bd\" (UID: \"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa\") " pod="openstack/aodh-55b1-account-create-update-rl2bd" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.869208 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrd2m\" (UniqueName: \"kubernetes.io/projected/4445c84e-2108-44e0-a46e-673fe0858df3-kube-api-access-lrd2m\") pod \"aodh-db-create-zmb2f\" (UID: \"4445c84e-2108-44e0-a46e-673fe0858df3\") " pod="openstack/aodh-db-create-zmb2f" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.869583 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4445c84e-2108-44e0-a46e-673fe0858df3-operator-scripts\") 
pod \"aodh-db-create-zmb2f\" (UID: \"4445c84e-2108-44e0-a46e-673fe0858df3\") " pod="openstack/aodh-db-create-zmb2f" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.888826 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrd2m\" (UniqueName: \"kubernetes.io/projected/4445c84e-2108-44e0-a46e-673fe0858df3-kube-api-access-lrd2m\") pod \"aodh-db-create-zmb2f\" (UID: \"4445c84e-2108-44e0-a46e-673fe0858df3\") " pod="openstack/aodh-db-create-zmb2f" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.971763 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmth8\" (UniqueName: \"kubernetes.io/projected/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa-kube-api-access-rmth8\") pod \"aodh-55b1-account-create-update-rl2bd\" (UID: \"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa\") " pod="openstack/aodh-55b1-account-create-update-rl2bd" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.973019 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa-operator-scripts\") pod \"aodh-55b1-account-create-update-rl2bd\" (UID: \"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa\") " pod="openstack/aodh-55b1-account-create-update-rl2bd" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.973892 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa-operator-scripts\") pod \"aodh-55b1-account-create-update-rl2bd\" (UID: \"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa\") " pod="openstack/aodh-55b1-account-create-update-rl2bd" Feb 18 14:23:38 crc kubenswrapper[4739]: I0218 14:23:38.990095 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmth8\" (UniqueName: \"kubernetes.io/projected/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa-kube-api-access-rmth8\") pod \"aodh-55b1-account-create-update-rl2bd\" (UID: \"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa\") " pod="openstack/aodh-55b1-account-create-update-rl2bd" Feb 18 14:23:39 crc kubenswrapper[4739]: I0218 14:23:39.035600 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-zmb2f" Feb 18 14:23:39 crc kubenswrapper[4739]: I0218 14:23:39.046773 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-55b1-account-create-update-rl2bd" Feb 18 14:23:39 crc kubenswrapper[4739]: I0218 14:23:39.294580 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 18 14:23:39 crc kubenswrapper[4739]: I0218 14:23:39.709805 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-55b1-account-create-update-rl2bd"] Feb 18 14:23:39 crc kubenswrapper[4739]: W0218 14:23:39.716902 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7351c0c9_c9c1_474c_a9cc_cde24bd45dfa.slice/crio-37b22c12f9cec405f129a6839eee3abcd2d4cbf9acafa151390069a06d61eb80 WatchSource:0}: Error finding container 37b22c12f9cec405f129a6839eee3abcd2d4cbf9acafa151390069a06d61eb80: Status 404 returned error can't find the container with id 37b22c12f9cec405f129a6839eee3abcd2d4cbf9acafa151390069a06d61eb80 Feb 18 14:23:39 crc kubenswrapper[4739]: I0218 14:23:39.826501 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-55b1-account-create-update-rl2bd" event={"ID":"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa","Type":"ContainerStarted","Data":"37b22c12f9cec405f129a6839eee3abcd2d4cbf9acafa151390069a06d61eb80"} Feb 18 14:23:39 crc kubenswrapper[4739]: I0218 14:23:39.889576 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-zmb2f"] Feb 18 14:23:39 crc kubenswrapper[4739]: I0218 14:23:39.929719 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-ldxnr"] Feb 18 14:23:39 crc kubenswrapper[4739]: I0218 14:23:39.932266 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:39 crc kubenswrapper[4739]: I0218 14:23:39.935913 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 18 14:23:39 crc kubenswrapper[4739]: I0218 14:23:39.936163 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 18 14:23:39 crc kubenswrapper[4739]: I0218 14:23:39.945736 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-ldxnr"] Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.120495 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-config-data\") pod \"nova-cell0-cell-mapping-ldxnr\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.120998 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwvtx\" (UniqueName: \"kubernetes.io/projected/5f44227f-28d1-4aaf-9133-c4560b893022-kube-api-access-mwvtx\") pod \"nova-cell0-cell-mapping-ldxnr\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.121136 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-scripts\") pod \"nova-cell0-cell-mapping-ldxnr\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc 
kubenswrapper[4739]: I0218 14:23:40.121524 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ldxnr\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.223914 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ldxnr\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.224043 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-config-data\") pod \"nova-cell0-cell-mapping-ldxnr\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.224075 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwvtx\" (UniqueName: \"kubernetes.io/projected/5f44227f-28d1-4aaf-9133-c4560b893022-kube-api-access-mwvtx\") pod \"nova-cell0-cell-mapping-ldxnr\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.224109 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-scripts\") pod \"nova-cell0-cell-mapping-ldxnr\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.253156 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-scripts\") pod \"nova-cell0-cell-mapping-ldxnr\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.260251 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ldxnr\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.264218 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-config-data\") pod \"nova-cell0-cell-mapping-ldxnr\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.376780 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwvtx\" (UniqueName: \"kubernetes.io/projected/5f44227f-28d1-4aaf-9133-c4560b893022-kube-api-access-mwvtx\") pod \"nova-cell0-cell-mapping-ldxnr\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.407901 4739 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.409704 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.486607 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.525524 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.541822 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60a63b94-9b6f-4117-bd43-e7c7986f3824-config-data\") pod \"nova-scheduler-0\" (UID: \"60a63b94-9b6f-4117-bd43-e7c7986f3824\") " pod="openstack/nova-scheduler-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.541898 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnmrw\" (UniqueName: \"kubernetes.io/projected/60a63b94-9b6f-4117-bd43-e7c7986f3824-kube-api-access-nnmrw\") pod \"nova-scheduler-0\" (UID: \"60a63b94-9b6f-4117-bd43-e7c7986f3824\") " pod="openstack/nova-scheduler-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.541987 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60a63b94-9b6f-4117-bd43-e7c7986f3824-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"60a63b94-9b6f-4117-bd43-e7c7986f3824\") " pod="openstack/nova-scheduler-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.542343 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.635745 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.638551 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.645179 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60a63b94-9b6f-4117-bd43-e7c7986f3824-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"60a63b94-9b6f-4117-bd43-e7c7986f3824\") " pod="openstack/nova-scheduler-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.645613 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60a63b94-9b6f-4117-bd43-e7c7986f3824-config-data\") pod \"nova-scheduler-0\" (UID: \"60a63b94-9b6f-4117-bd43-e7c7986f3824\") " pod="openstack/nova-scheduler-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.645781 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnmrw\" (UniqueName: \"kubernetes.io/projected/60a63b94-9b6f-4117-bd43-e7c7986f3824-kube-api-access-nnmrw\") pod \"nova-scheduler-0\" (UID: \"60a63b94-9b6f-4117-bd43-e7c7986f3824\") " pod="openstack/nova-scheduler-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.666152 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60a63b94-9b6f-4117-bd43-e7c7986f3824-config-data\") pod \"nova-scheduler-0\" (UID: \"60a63b94-9b6f-4117-bd43-e7c7986f3824\") " pod="openstack/nova-scheduler-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.666702 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60a63b94-9b6f-4117-bd43-e7c7986f3824-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"60a63b94-9b6f-4117-bd43-e7c7986f3824\") " pod="openstack/nova-scheduler-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.681685 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.733996 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.752897 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d51f11-fba7-4368-9665-198dca1f9adc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"60d51f11-fba7-4368-9665-198dca1f9adc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.752994 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d51f11-fba7-4368-9665-198dca1f9adc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"60d51f11-fba7-4368-9665-198dca1f9adc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.753095 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjqbp\" (UniqueName: \"kubernetes.io/projected/60d51f11-fba7-4368-9665-198dca1f9adc-kube-api-access-vjqbp\") pod \"nova-cell1-novncproxy-0\" (UID: \"60d51f11-fba7-4368-9665-198dca1f9adc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.818864 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-nnmrw\" (UniqueName: \"kubernetes.io/projected/60a63b94-9b6f-4117-bd43-e7c7986f3824-kube-api-access-nnmrw\") pod \"nova-scheduler-0\" (UID: \"60a63b94-9b6f-4117-bd43-e7c7986f3824\") " pod="openstack/nova-scheduler-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.836096 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.920119 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d51f11-fba7-4368-9665-198dca1f9adc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"60d51f11-fba7-4368-9665-198dca1f9adc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.920364 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d51f11-fba7-4368-9665-198dca1f9adc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"60d51f11-fba7-4368-9665-198dca1f9adc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.928231 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjqbp\" (UniqueName: \"kubernetes.io/projected/60d51f11-fba7-4368-9665-198dca1f9adc-kube-api-access-vjqbp\") pod \"nova-cell1-novncproxy-0\" (UID: \"60d51f11-fba7-4368-9665-198dca1f9adc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.937355 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-zmb2f" event={"ID":"4445c84e-2108-44e0-a46e-673fe0858df3","Type":"ContainerStarted","Data":"3ed2d01779e3f9f2f1a7f3657c8ea7e0c04a12e2297ea7cab5002b17b30a7120"} Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.944635 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d51f11-fba7-4368-9665-198dca1f9adc-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"60d51f11-fba7-4368-9665-198dca1f9adc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.950153 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d51f11-fba7-4368-9665-198dca1f9adc-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"60d51f11-fba7-4368-9665-198dca1f9adc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.971140 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjqbp\" (UniqueName: \"kubernetes.io/projected/60d51f11-fba7-4368-9665-198dca1f9adc-kube-api-access-vjqbp\") pod \"nova-cell1-novncproxy-0\" (UID: \"60d51f11-fba7-4368-9665-198dca1f9adc\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:23:40 crc kubenswrapper[4739]: I0218 14:23:40.973545 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:40.989849 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:40.995603 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:40.995645 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea8be82-c714-4993-b2c0-7af4a7fde0d3","Type":"ContainerStarted","Data":"c29f84cb2f10dd5869ffc87617c8a9e99b5f1b7ab01e8f8f6bf9c1b7fd53866f"} Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:40.995759 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.022016 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.093458 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.133171 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69e98338-825d-4f76-833c-2e1ea807d942-config-data\") pod \"nova-api-0\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.133553 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69e98338-825d-4f76-833c-2e1ea807d942-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.133675 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69e98338-825d-4f76-833c-2e1ea807d942-logs\") pod \"nova-api-0\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.133730 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg8gg\" (UniqueName: \"kubernetes.io/projected/69e98338-825d-4f76-833c-2e1ea807d942-kube-api-access-rg8gg\") pod \"nova-api-0\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.157558 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.160888 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.166309 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.215507 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.238208 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-logs\") pod \"nova-metadata-0\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.238299 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69e98338-825d-4f76-833c-2e1ea807d942-logs\") pod \"nova-api-0\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.238350 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg8gg\" (UniqueName: \"kubernetes.io/projected/69e98338-825d-4f76-833c-2e1ea807d942-kube-api-access-rg8gg\") pod \"nova-api-0\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.238400 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69e98338-825d-4f76-833c-2e1ea807d942-config-data\") pod \"nova-api-0\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.240161 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69e98338-825d-4f76-833c-2e1ea807d942-logs\") pod \"nova-api-0\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.242892 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.242941 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tztg2\" (UniqueName: \"kubernetes.io/projected/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-kube-api-access-tztg2\") pod \"nova-metadata-0\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.243149 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-config-data\") pod \"nova-metadata-0\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.243186 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69e98338-825d-4f76-833c-2e1ea807d942-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"69e98338-825d-4f76-833c-2e1ea807d942\") " pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.258593 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69e98338-825d-4f76-833c-2e1ea807d942-config-data\") pod \"nova-api-0\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.261964 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.291245408 podStartE2EDuration="7.261939375s" podCreationTimestamp="2026-02-18 14:23:34 +0000 UTC" firstStartedPulling="2026-02-18 14:23:35.724671321 +0000 UTC m=+1448.220392243" lastFinishedPulling="2026-02-18 14:23:39.695365288 +0000 UTC m=+1452.191086210" observedRunningTime="2026-02-18 14:23:41.049993363 +0000 UTC m=+1453.545714295" watchObservedRunningTime="2026-02-18 14:23:41.261939375 +0000 UTC m=+1453.757660297" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.284698 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69e98338-825d-4f76-833c-2e1ea807d942-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.318143 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg8gg\" (UniqueName: \"kubernetes.io/projected/69e98338-825d-4f76-833c-2e1ea807d942-kube-api-access-rg8gg\") pod \"nova-api-0\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.351615 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-config-data\") pod \"nova-metadata-0\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.364305 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-logs\") pod \"nova-metadata-0\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.364767 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.364790 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tztg2\" (UniqueName: \"kubernetes.io/projected/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-kube-api-access-tztg2\") pod \"nova-metadata-0\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.364925 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-config-data\") pod \"nova-metadata-0\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: 
I0218 14:23:41.365979 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-qmxqt"] Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.366607 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-logs\") pod \"nova-metadata-0\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.373262 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.380836 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.381947 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-qmxqt"] Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.394397 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tztg2\" (UniqueName: \"kubernetes.io/projected/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-kube-api-access-tztg2\") pod \"nova-metadata-0\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.474118 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7bkd\" (UniqueName: \"kubernetes.io/projected/cb3e9cc3-348e-4556-89a2-ea261dd47147-kube-api-access-p7bkd\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.474243 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.474354 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-config\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.474394 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.474486 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " 
pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.474516 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.559073 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.571455 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.579579 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-config\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.579875 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.580076 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.580206 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.581522 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.581254 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-config\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.581255 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.581127 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.582137 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7bkd\" (UniqueName: \"kubernetes.io/projected/cb3e9cc3-348e-4556-89a2-ea261dd47147-kube-api-access-p7bkd\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.582424 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.584151 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.608395 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7bkd\" (UniqueName: \"kubernetes.io/projected/cb3e9cc3-348e-4556-89a2-ea261dd47147-kube-api-access-p7bkd\") pod \"dnsmasq-dns-568d7fd7cf-qmxqt\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.745870 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.834512 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:23:41 crc kubenswrapper[4739]: I0218 14:23:41.858871 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-ldxnr"] Feb 18 14:23:42 crc kubenswrapper[4739]: I0218 14:23:42.060611 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"60a63b94-9b6f-4117-bd43-e7c7986f3824","Type":"ContainerStarted","Data":"1e4203ffbb72f10f3e23eeb7b58aca4644efc86b96e25b7947b3e87de9a09564"} Feb 18 14:23:42 crc kubenswrapper[4739]: I0218 14:23:42.069827 4739 generic.go:334] "Generic (PLEG): container finished" podID="7351c0c9-c9c1-474c-a9cc-cde24bd45dfa" containerID="633345116a43d3ca8fa44023cd81269b98b8fe89948eab70d0c8a2b4002309e9" exitCode=0 Feb 18 14:23:42 crc kubenswrapper[4739]: I0218 14:23:42.069929 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-55b1-account-create-update-rl2bd" event={"ID":"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa","Type":"ContainerDied","Data":"633345116a43d3ca8fa44023cd81269b98b8fe89948eab70d0c8a2b4002309e9"} Feb 18 14:23:42 crc kubenswrapper[4739]: I0218 14:23:42.078805 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ldxnr" event={"ID":"5f44227f-28d1-4aaf-9133-c4560b893022","Type":"ContainerStarted","Data":"a38d14b23155387f49e9e35f9e4c0f5e1fafb41bc41b4bda60fd5f970734778d"} Feb 18 14:23:42 crc kubenswrapper[4739]: I0218 14:23:42.091925 4739 generic.go:334] "Generic (PLEG): container finished" podID="4445c84e-2108-44e0-a46e-673fe0858df3" containerID="67951a3352fb939ea45b17ca75ec53a682c20dd4d63961be0be0da15f32b4807" exitCode=0 Feb 18 14:23:42 crc kubenswrapper[4739]: I0218 14:23:42.092260 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-zmb2f" event={"ID":"4445c84e-2108-44e0-a46e-673fe0858df3","Type":"ContainerDied","Data":"67951a3352fb939ea45b17ca75ec53a682c20dd4d63961be0be0da15f32b4807"} Feb 18 14:23:42 crc kubenswrapper[4739]: I0218 14:23:42.223178 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 14:23:42 crc kubenswrapper[4739]: I0218 14:23:42.631595 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:23:42 crc kubenswrapper[4739]: I0218 14:23:42.697813 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.011342 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7d9ft"] Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.027509 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.031820 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.041466 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.080206 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7d9ft"] Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.123162 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-qmxqt"] Feb 18 14:23:43 crc kubenswrapper[4739]: W0218 14:23:43.142676 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb3e9cc3_348e_4556_89a2_ea261dd47147.slice/crio-3735cb006b027d9cddfe7de2fdfabfbd28a60f1cc6094e080c7661fe3bdd11bf WatchSource:0}: Error finding container 3735cb006b027d9cddfe7de2fdfabfbd28a60f1cc6094e080c7661fe3bdd11bf: Status 404 returned error can't find the container with id 3735cb006b027d9cddfe7de2fdfabfbd28a60f1cc6094e080c7661fe3bdd11bf Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.143011 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"60d51f11-fba7-4368-9665-198dca1f9adc","Type":"ContainerStarted","Data":"5e425dc81372bc58ea5a732a114720e008b75f79e58f406fbae181589aeba1b6"} Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.157704 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ldxnr" event={"ID":"5f44227f-28d1-4aaf-9133-c4560b893022","Type":"ContainerStarted","Data":"c6cce8603450086875d16ae66c0fe0efdc54a90290fdaaf6cec216bd19489355"} Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.172621 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69e98338-825d-4f76-833c-2e1ea807d942","Type":"ContainerStarted","Data":"64030588e2930d3d06f331c679500514142af233ae50cdca79cac3e5508cd8e1"} Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.179221 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-scripts\") pod \"nova-cell1-conductor-db-sync-7d9ft\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.179321 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-config-data\") pod \"nova-cell1-conductor-db-sync-7d9ft\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.179352 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7768\" (UniqueName: \"kubernetes.io/projected/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-kube-api-access-l7768\") pod \"nova-cell1-conductor-db-sync-7d9ft\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.179377 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7d9ft\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.189377 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1","Type":"ContainerStarted","Data":"6e2df7ee9b43e8c8d150f1e10f74fbb1b12aa869992bbc9978302cfef895fb90"} Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.189978 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-ldxnr" podStartSLOduration=4.189959 podStartE2EDuration="4.189959s" podCreationTimestamp="2026-02-18 14:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:23:43.189550469 +0000 UTC m=+1455.685271401" watchObservedRunningTime="2026-02-18 14:23:43.189959 +0000 UTC m=+1455.685679922" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.282234 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-scripts\") pod \"nova-cell1-conductor-db-sync-7d9ft\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.282337 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-config-data\") pod \"nova-cell1-conductor-db-sync-7d9ft\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.282368 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7768\" (UniqueName: \"kubernetes.io/projected/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-kube-api-access-l7768\") pod \"nova-cell1-conductor-db-sync-7d9ft\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.282389 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7d9ft\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.291710 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-config-data\") pod \"nova-cell1-conductor-db-sync-7d9ft\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.295012 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7d9ft\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " 
pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.297915 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-scripts\") pod \"nova-cell1-conductor-db-sync-7d9ft\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.327108 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7768\" (UniqueName: \"kubernetes.io/projected/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-kube-api-access-l7768\") pod \"nova-cell1-conductor-db-sync-7d9ft\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.422249 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.807118 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-55b1-account-create-update-rl2bd" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.870635 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wg5zz"] Feb 18 14:23:43 crc kubenswrapper[4739]: E0218 14:23:43.871606 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7351c0c9-c9c1-474c-a9cc-cde24bd45dfa" containerName="mariadb-account-create-update" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.871696 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7351c0c9-c9c1-474c-a9cc-cde24bd45dfa" containerName="mariadb-account-create-update" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.872072 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="7351c0c9-c9c1-474c-a9cc-cde24bd45dfa" containerName="mariadb-account-create-update" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.876286 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.922993 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa-operator-scripts\") pod \"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa\" (UID: \"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa\") " Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.923690 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmth8\" (UniqueName: \"kubernetes.io/projected/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa-kube-api-access-rmth8\") pod \"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa\" (UID: \"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa\") " Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.923845 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7351c0c9-c9c1-474c-a9cc-cde24bd45dfa" (UID: "7351c0c9-c9c1-474c-a9cc-cde24bd45dfa"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.924508 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.927865 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wg5zz"] Feb 18 14:23:43 crc kubenswrapper[4739]: I0218 14:23:43.931224 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa-kube-api-access-rmth8" (OuterVolumeSpecName: "kube-api-access-rmth8") pod "7351c0c9-c9c1-474c-a9cc-cde24bd45dfa" (UID: "7351c0c9-c9c1-474c-a9cc-cde24bd45dfa"). InnerVolumeSpecName "kube-api-access-rmth8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.027279 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bbaed51-382b-4b1b-8b3f-95521f415472-utilities\") pod \"redhat-operators-wg5zz\" (UID: \"0bbaed51-382b-4b1b-8b3f-95521f415472\") " pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.027492 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-724gm\" (UniqueName: \"kubernetes.io/projected/0bbaed51-382b-4b1b-8b3f-95521f415472-kube-api-access-724gm\") pod \"redhat-operators-wg5zz\" (UID: \"0bbaed51-382b-4b1b-8b3f-95521f415472\") " pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.027595 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bbaed51-382b-4b1b-8b3f-95521f415472-catalog-content\") pod \"redhat-operators-wg5zz\" (UID: \"0bbaed51-382b-4b1b-8b3f-95521f415472\") " pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.028038 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmth8\" (UniqueName: \"kubernetes.io/projected/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa-kube-api-access-rmth8\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.130151 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bbaed51-382b-4b1b-8b3f-95521f415472-catalog-content\") pod \"redhat-operators-wg5zz\" (UID: \"0bbaed51-382b-4b1b-8b3f-95521f415472\") " pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.130350 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bbaed51-382b-4b1b-8b3f-95521f415472-utilities\") pod \"redhat-operators-wg5zz\" (UID: \"0bbaed51-382b-4b1b-8b3f-95521f415472\") " pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.130410 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-724gm\" (UniqueName: \"kubernetes.io/projected/0bbaed51-382b-4b1b-8b3f-95521f415472-kube-api-access-724gm\") pod \"redhat-operators-wg5zz\" (UID: 
\"0bbaed51-382b-4b1b-8b3f-95521f415472\") " pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.131088 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bbaed51-382b-4b1b-8b3f-95521f415472-utilities\") pod \"redhat-operators-wg5zz\" (UID: \"0bbaed51-382b-4b1b-8b3f-95521f415472\") " pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.132856 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bbaed51-382b-4b1b-8b3f-95521f415472-catalog-content\") pod \"redhat-operators-wg5zz\" (UID: \"0bbaed51-382b-4b1b-8b3f-95521f415472\") " pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.159476 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-zmb2f" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.188207 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-724gm\" (UniqueName: \"kubernetes.io/projected/0bbaed51-382b-4b1b-8b3f-95521f415472-kube-api-access-724gm\") pod \"redhat-operators-wg5zz\" (UID: \"0bbaed51-382b-4b1b-8b3f-95521f415472\") " pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.232821 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7d9ft"] Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.233237 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4445c84e-2108-44e0-a46e-673fe0858df3-operator-scripts\") pod \"4445c84e-2108-44e0-a46e-673fe0858df3\" (UID: \"4445c84e-2108-44e0-a46e-673fe0858df3\") " Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.233593 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrd2m\" (UniqueName: \"kubernetes.io/projected/4445c84e-2108-44e0-a46e-673fe0858df3-kube-api-access-lrd2m\") pod \"4445c84e-2108-44e0-a46e-673fe0858df3\" (UID: \"4445c84e-2108-44e0-a46e-673fe0858df3\") " Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.236248 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4445c84e-2108-44e0-a46e-673fe0858df3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4445c84e-2108-44e0-a46e-673fe0858df3" (UID: "4445c84e-2108-44e0-a46e-673fe0858df3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.237949 4739 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4445c84e-2108-44e0-a46e-673fe0858df3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.253033 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-zmb2f" event={"ID":"4445c84e-2108-44e0-a46e-673fe0858df3","Type":"ContainerDied","Data":"3ed2d01779e3f9f2f1a7f3657c8ea7e0c04a12e2297ea7cab5002b17b30a7120"} Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.253120 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ed2d01779e3f9f2f1a7f3657c8ea7e0c04a12e2297ea7cab5002b17b30a7120" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.253174 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-zmb2f" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.255170 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4445c84e-2108-44e0-a46e-673fe0858df3-kube-api-access-lrd2m" (OuterVolumeSpecName: "kube-api-access-lrd2m") pod "4445c84e-2108-44e0-a46e-673fe0858df3" (UID: "4445c84e-2108-44e0-a46e-673fe0858df3"). InnerVolumeSpecName "kube-api-access-lrd2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.269261 4739 generic.go:334] "Generic (PLEG): container finished" podID="cb3e9cc3-348e-4556-89a2-ea261dd47147" containerID="21d6c1252de616814b74822ec06612c09a85d4a3dc10b578fb97435ea22e69d8" exitCode=0 Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.269394 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" event={"ID":"cb3e9cc3-348e-4556-89a2-ea261dd47147","Type":"ContainerDied","Data":"21d6c1252de616814b74822ec06612c09a85d4a3dc10b578fb97435ea22e69d8"} Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.269553 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" event={"ID":"cb3e9cc3-348e-4556-89a2-ea261dd47147","Type":"ContainerStarted","Data":"3735cb006b027d9cddfe7de2fdfabfbd28a60f1cc6094e080c7661fe3bdd11bf"} Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.307998 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-55b1-account-create-update-rl2bd" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.308304 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-55b1-account-create-update-rl2bd" event={"ID":"7351c0c9-c9c1-474c-a9cc-cde24bd45dfa","Type":"ContainerDied","Data":"37b22c12f9cec405f129a6839eee3abcd2d4cbf9acafa151390069a06d61eb80"} Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.308363 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37b22c12f9cec405f129a6839eee3abcd2d4cbf9acafa151390069a06d61eb80" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.351504 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrd2m\" (UniqueName: \"kubernetes.io/projected/4445c84e-2108-44e0-a46e-673fe0858df3-kube-api-access-lrd2m\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.445544 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:23:44 crc kubenswrapper[4739]: I0218 14:23:44.963824 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 14:23:45 crc kubenswrapper[4739]: I0218 14:23:45.012744 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:23:45 crc kubenswrapper[4739]: I0218 14:23:45.057260 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wg5zz"] Feb 18 14:23:45 crc kubenswrapper[4739]: W0218 14:23:45.082371 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bbaed51_382b_4b1b_8b3f_95521f415472.slice/crio-8246321a9a69ef9443f0eafe62f613f2bf2304eee3857bb71521e44ea71bf052 WatchSource:0}: Error finding container 8246321a9a69ef9443f0eafe62f613f2bf2304eee3857bb71521e44ea71bf052: Status 404 returned error can't find the container with id 8246321a9a69ef9443f0eafe62f613f2bf2304eee3857bb71521e44ea71bf052 Feb 18 14:23:45 crc kubenswrapper[4739]: I0218 14:23:45.342782 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7d9ft" event={"ID":"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c","Type":"ContainerStarted","Data":"f654a93fc558fd96d5cdb40c4eb8145a76ceb6daf5c1d8dd83b579ef3e4f1ae6"} Feb 18 14:23:45 crc kubenswrapper[4739]: I0218 14:23:45.343050 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7d9ft" event={"ID":"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c","Type":"ContainerStarted","Data":"a2ff715f6687dcb420415366f7ad28d9ba10898b955268123d1a60c93c36a991"} Feb 18 14:23:45 crc kubenswrapper[4739]: I0218 14:23:45.413059 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg5zz" event={"ID":"0bbaed51-382b-4b1b-8b3f-95521f415472","Type":"ContainerStarted","Data":"8246321a9a69ef9443f0eafe62f613f2bf2304eee3857bb71521e44ea71bf052"} Feb 18 14:23:45 crc kubenswrapper[4739]: I0218 14:23:45.415184 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-7d9ft" podStartSLOduration=3.415165076 podStartE2EDuration="3.415165076s" podCreationTimestamp="2026-02-18 14:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:23:45.402160469 +0000 UTC m=+1457.897881391" watchObservedRunningTime="2026-02-18 14:23:45.415165076 +0000 UTC m=+1457.910885998" Feb 18 14:23:45 crc kubenswrapper[4739]: I0218 14:23:45.439808 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" event={"ID":"cb3e9cc3-348e-4556-89a2-ea261dd47147","Type":"ContainerStarted","Data":"94476dfafd6d1d5f23f9e15354d4a5e30397b87f6bed37cf1f501afccf7bb2cc"} Feb 18 14:23:45 crc kubenswrapper[4739]: I0218 14:23:45.440771 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:45 crc kubenswrapper[4739]: I0218 14:23:45.477674 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" podStartSLOduration=4.477647635 podStartE2EDuration="4.477647635s" podCreationTimestamp="2026-02-18 14:23:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 14:23:45.461820807 +0000 UTC m=+1457.957541729" watchObservedRunningTime="2026-02-18 14:23:45.477647635 +0000 UTC m=+1457.973368557" Feb 18 14:23:46 crc kubenswrapper[4739]: I0218 14:23:46.465420 4739 generic.go:334] "Generic (PLEG): container finished" podID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerID="6869795123dd672f097b8cf90d0e5e277663d03ea727ac622ba0a62b525526df" exitCode=0 Feb 18 14:23:46 crc kubenswrapper[4739]: I0218 14:23:46.465545 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg5zz" event={"ID":"0bbaed51-382b-4b1b-8b3f-95521f415472","Type":"ContainerDied","Data":"6869795123dd672f097b8cf90d0e5e277663d03ea727ac622ba0a62b525526df"} Feb 18 14:23:48 crc kubenswrapper[4739]: I0218 14:23:48.505150 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"60a63b94-9b6f-4117-bd43-e7c7986f3824","Type":"ContainerStarted","Data":"270c5492dac27b54d8ea38736f097fb276dfdd13b5159fe0a400f376b6d5be8f"} Feb 18 14:23:48 crc kubenswrapper[4739]: I0218 14:23:48.538738 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1","Type":"ContainerStarted","Data":"c27e75478b8aac6ce642cba868d17695b7ae39c02c2bb372e6c68d1a092137a3"} Feb 18 14:23:48 crc kubenswrapper[4739]: I0218 14:23:48.547098 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"60d51f11-fba7-4368-9665-198dca1f9adc","Type":"ContainerStarted","Data":"4b60b38fea8ccc13c08f02fa56b81b4a343cc57d4a2683a068d2eaff684ca543"} Feb 18 14:23:48 crc kubenswrapper[4739]: I0218 14:23:48.547230 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="60d51f11-fba7-4368-9665-198dca1f9adc" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://4b60b38fea8ccc13c08f02fa56b81b4a343cc57d4a2683a068d2eaff684ca543" gracePeriod=30 Feb 18 14:23:48 crc kubenswrapper[4739]: I0218 14:23:48.551954 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.417516641 podStartE2EDuration="8.551934942s" podCreationTimestamp="2026-02-18 14:23:40 +0000 UTC" firstStartedPulling="2026-02-18 14:23:41.847703754 +0000 UTC m=+1454.343424676" lastFinishedPulling="2026-02-18 14:23:47.982122055 +0000 UTC m=+1460.477842977" observedRunningTime="2026-02-18 14:23:48.524938014 +0000 UTC m=+1461.020658936" watchObservedRunningTime="2026-02-18 14:23:48.551934942 +0000 UTC m=+1461.047655864" Feb 18 14:23:48 crc kubenswrapper[4739]: I0218 14:23:48.578162 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.766425673 podStartE2EDuration="8.57814112s" podCreationTimestamp="2026-02-18 14:23:40 +0000 UTC" firstStartedPulling="2026-02-18 14:23:42.231540713 +0000 UTC m=+1454.727261635" lastFinishedPulling="2026-02-18 14:23:48.04325616 +0000 UTC m=+1460.538977082" observedRunningTime="2026-02-18 14:23:48.567652087 +0000 UTC m=+1461.063373039" watchObservedRunningTime="2026-02-18 14:23:48.57814112 +0000 UTC m=+1461.073862042" Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.348304 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-xg8g2"] Feb 18 14:23:49 crc kubenswrapper[4739]: E0218 14:23:49.351055 4739 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4445c84e-2108-44e0-a46e-673fe0858df3" containerName="mariadb-database-create" Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.351084 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4445c84e-2108-44e0-a46e-673fe0858df3" containerName="mariadb-database-create" Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.351409 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4445c84e-2108-44e0-a46e-673fe0858df3" containerName="mariadb-database-create" Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.353335 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-xg8g2" Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.356834 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-747v8" Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.357029 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.357825 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.358126 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.361677 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-xg8g2"] Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.517808 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnk6l\" (UniqueName: \"kubernetes.io/projected/1543620e-d684-4634-ba89-662f02f2b0e4-kube-api-access-hnk6l\") pod \"aodh-db-sync-xg8g2\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " pod="openstack/aodh-db-sync-xg8g2" Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.517963 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-config-data\") pod \"aodh-db-sync-xg8g2\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " pod="openstack/aodh-db-sync-xg8g2" Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.518111 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-scripts\") pod \"aodh-db-sync-xg8g2\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " pod="openstack/aodh-db-sync-xg8g2" Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.518146 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-combined-ca-bundle\") pod \"aodh-db-sync-xg8g2\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " pod="openstack/aodh-db-sync-xg8g2" Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.577182 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69e98338-825d-4f76-833c-2e1ea807d942","Type":"ContainerStarted","Data":"149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944"} Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.577231 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"69e98338-825d-4f76-833c-2e1ea807d942","Type":"ContainerStarted","Data":"7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459"} Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.579145 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1","Type":"ContainerStarted","Data":"cd4f224eda5c86f0e3784e45d9715568dac8dfc7c31367362a6e2989121137c0"} Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.579218 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" containerName="nova-metadata-log" containerID="cri-o://c27e75478b8aac6ce642cba868d17695b7ae39c02c2bb372e6c68d1a092137a3" gracePeriod=30 Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.579523 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" containerName="nova-metadata-metadata" containerID="cri-o://cd4f224eda5c86f0e3784e45d9715568dac8dfc7c31367362a6e2989121137c0" gracePeriod=30 Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.583912 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg5zz" event={"ID":"0bbaed51-382b-4b1b-8b3f-95521f415472","Type":"ContainerStarted","Data":"0ed9ea0acaa9a000246ad43383e3ff8712eb08ccc211dd774ede3a75ac80e158"} Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.614638 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.228886767 podStartE2EDuration="9.614619207s" podCreationTimestamp="2026-02-18 14:23:40 +0000 UTC" firstStartedPulling="2026-02-18 14:23:42.655091319 +0000 UTC m=+1455.150812241" lastFinishedPulling="2026-02-18 14:23:48.040823739 +0000 UTC m=+1460.536544681" observedRunningTime="2026-02-18 14:23:49.593998869 +0000 UTC m=+1462.089719811" watchObservedRunningTime="2026-02-18 14:23:49.614619207 +0000 UTC m=+1462.110340129" Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.619674 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-scripts\") pod \"aodh-db-sync-xg8g2\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " pod="openstack/aodh-db-sync-xg8g2" Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.619865 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-combined-ca-bundle\") pod \"aodh-db-sync-xg8g2\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " pod="openstack/aodh-db-sync-xg8g2" Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.619962 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnk6l\" (UniqueName: \"kubernetes.io/projected/1543620e-d684-4634-ba89-662f02f2b0e4-kube-api-access-hnk6l\") pod \"aodh-db-sync-xg8g2\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " pod="openstack/aodh-db-sync-xg8g2" Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.620252 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-config-data\") pod \"aodh-db-sync-xg8g2\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " 
pod="openstack/aodh-db-sync-xg8g2" Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.623965 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.273812395 podStartE2EDuration="9.623943721s" podCreationTimestamp="2026-02-18 14:23:40 +0000 UTC" firstStartedPulling="2026-02-18 14:23:42.632486521 +0000 UTC m=+1455.128207443" lastFinishedPulling="2026-02-18 14:23:47.982617857 +0000 UTC m=+1460.478338769" observedRunningTime="2026-02-18 14:23:49.622275749 +0000 UTC m=+1462.117996671" watchObservedRunningTime="2026-02-18 14:23:49.623943721 +0000 UTC m=+1462.119664663" Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.636297 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-config-data\") pod \"aodh-db-sync-xg8g2\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " pod="openstack/aodh-db-sync-xg8g2" Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.638008 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-scripts\") pod \"aodh-db-sync-xg8g2\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " pod="openstack/aodh-db-sync-xg8g2" Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.638354 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-combined-ca-bundle\") pod \"aodh-db-sync-xg8g2\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " pod="openstack/aodh-db-sync-xg8g2" Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.645392 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnk6l\" (UniqueName: \"kubernetes.io/projected/1543620e-d684-4634-ba89-662f02f2b0e4-kube-api-access-hnk6l\") pod \"aodh-db-sync-xg8g2\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " pod="openstack/aodh-db-sync-xg8g2" Feb 18 14:23:49 crc kubenswrapper[4739]: I0218 14:23:49.674328 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-xg8g2" Feb 18 14:23:50 crc kubenswrapper[4739]: I0218 14:23:50.436911 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-xg8g2"] Feb 18 14:23:50 crc kubenswrapper[4739]: I0218 14:23:50.607271 4739 generic.go:334] "Generic (PLEG): container finished" podID="e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" containerID="cd4f224eda5c86f0e3784e45d9715568dac8dfc7c31367362a6e2989121137c0" exitCode=0 Feb 18 14:23:50 crc kubenswrapper[4739]: I0218 14:23:50.607530 4739 generic.go:334] "Generic (PLEG): container finished" podID="e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" containerID="c27e75478b8aac6ce642cba868d17695b7ae39c02c2bb372e6c68d1a092137a3" exitCode=143 Feb 18 14:23:50 crc kubenswrapper[4739]: I0218 14:23:50.607573 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1","Type":"ContainerDied","Data":"cd4f224eda5c86f0e3784e45d9715568dac8dfc7c31367362a6e2989121137c0"} Feb 18 14:23:50 crc kubenswrapper[4739]: I0218 14:23:50.607596 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1","Type":"ContainerDied","Data":"c27e75478b8aac6ce642cba868d17695b7ae39c02c2bb372e6c68d1a092137a3"} Feb 18 14:23:50 crc kubenswrapper[4739]: I0218 14:23:50.609940 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-xg8g2" event={"ID":"1543620e-d684-4634-ba89-662f02f2b0e4","Type":"ContainerStarted","Data":"36831a1e37f2b21d3c3aead0d2ccb7ab0dbd8dd55f9fcd39a7a0f41c0dec9ba6"} Feb 18 14:23:50 crc kubenswrapper[4739]: I0218 14:23:50.837362 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 14:23:50 crc kubenswrapper[4739]: I0218 14:23:50.837547 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 14:23:50 crc kubenswrapper[4739]: I0218 14:23:50.876354 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 14:23:50 crc kubenswrapper[4739]: I0218 14:23:50.930253 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:23:50 crc kubenswrapper[4739]: I0218 14:23:50.992652 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.060289 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-combined-ca-bundle\") pod \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.060823 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-logs\") pod \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.060948 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-config-data\") pod \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.061009 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tztg2\" (UniqueName: \"kubernetes.io/projected/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-kube-api-access-tztg2\") pod \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\" (UID: \"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1\") " Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.061225 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-logs" (OuterVolumeSpecName: "logs") pod "e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" (UID: "e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.063184 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.084375 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-kube-api-access-tztg2" (OuterVolumeSpecName: "kube-api-access-tztg2") pod "e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" (UID: "e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1"). InnerVolumeSpecName "kube-api-access-tztg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.155941 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" (UID: "e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.165175 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.165209 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tztg2\" (UniqueName: \"kubernetes.io/projected/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-kube-api-access-tztg2\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.169043 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-config-data" (OuterVolumeSpecName: "config-data") pod "e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" (UID: "e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.272362 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.561813 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.561877 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.632835 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1","Type":"ContainerDied","Data":"6e2df7ee9b43e8c8d150f1e10f74fbb1b12aa869992bbc9978302cfef895fb90"} Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.632900 4739 scope.go:117] "RemoveContainer" containerID="cd4f224eda5c86f0e3784e45d9715568dac8dfc7c31367362a6e2989121137c0" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.633074 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.689483 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.703861 4739 scope.go:117] "RemoveContainer" containerID="c27e75478b8aac6ce642cba868d17695b7ae39c02c2bb372e6c68d1a092137a3" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.724062 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.747562 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.750251 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.753585 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:23:51 crc kubenswrapper[4739]: E0218 14:23:51.754234 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" containerName="nova-metadata-log" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.754267 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" containerName="nova-metadata-log" Feb 18 14:23:51 crc kubenswrapper[4739]: E0218 14:23:51.754315 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" containerName="nova-metadata-metadata" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.754324 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" containerName="nova-metadata-metadata" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.754667 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" containerName="nova-metadata-log" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.754690 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" containerName="nova-metadata-metadata" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.756965 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.760115 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.760225 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.775377 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.866982 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-qh25b"] Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.880347 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" podUID="496019f4-ba1f-40a6-9cff-bf7bd8dfee51" containerName="dnsmasq-dns" containerID="cri-o://38483feafbc06f3f1617bba16dbce12f0da5c76ff8f6d9cf24f5ec57e0763180" gracePeriod=10 Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.889256 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.889470 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvtvs\" (UniqueName: \"kubernetes.io/projected/ba4c65b2-a3f9-446e-9807-bb2290d04b87-kube-api-access-xvtvs\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.889515 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-config-data\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.889537 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba4c65b2-a3f9-446e-9807-bb2290d04b87-logs\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.889604 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.991124 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.991266 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvtvs\" (UniqueName: 
\"kubernetes.io/projected/ba4c65b2-a3f9-446e-9807-bb2290d04b87-kube-api-access-xvtvs\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.991306 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-config-data\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.991326 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba4c65b2-a3f9-446e-9807-bb2290d04b87-logs\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.991378 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.995991 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0" Feb 18 14:23:51 crc kubenswrapper[4739]: I0218 14:23:51.996134 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba4c65b2-a3f9-446e-9807-bb2290d04b87-logs\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0" Feb 18 14:23:52 crc kubenswrapper[4739]: I0218 14:23:52.000365 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-config-data\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0" Feb 18 14:23:52 crc kubenswrapper[4739]: I0218 14:23:52.014343 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0" Feb 18 14:23:52 crc kubenswrapper[4739]: I0218 14:23:52.017976 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvtvs\" (UniqueName: \"kubernetes.io/projected/ba4c65b2-a3f9-446e-9807-bb2290d04b87-kube-api-access-xvtvs\") pod \"nova-metadata-0\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " pod="openstack/nova-metadata-0" Feb 18 14:23:52 crc kubenswrapper[4739]: I0218 14:23:52.087199 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:23:52 crc kubenswrapper[4739]: I0218 14:23:52.432135 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1" path="/var/lib/kubelet/pods/e601f3ae-4b9b-4373-85e5-d55c2eb7c8c1/volumes" Feb 18 14:23:52 crc kubenswrapper[4739]: I0218 14:23:52.644733 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="69e98338-825d-4f76-833c-2e1ea807d942" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.242:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 14:23:52 crc kubenswrapper[4739]: I0218 14:23:52.645390 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="69e98338-825d-4f76-833c-2e1ea807d942" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.242:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 14:23:52 crc kubenswrapper[4739]: I0218 14:23:52.670554 4739 generic.go:334] "Generic (PLEG): container finished" podID="496019f4-ba1f-40a6-9cff-bf7bd8dfee51" containerID="38483feafbc06f3f1617bba16dbce12f0da5c76ff8f6d9cf24f5ec57e0763180" exitCode=0 Feb 18 14:23:52 crc kubenswrapper[4739]: I0218 14:23:52.670619 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" event={"ID":"496019f4-ba1f-40a6-9cff-bf7bd8dfee51","Type":"ContainerDied","Data":"38483feafbc06f3f1617bba16dbce12f0da5c76ff8f6d9cf24f5ec57e0763180"} Feb 18 14:23:52 crc kubenswrapper[4739]: W0218 14:23:52.778343 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba4c65b2_a3f9_446e_9807_bb2290d04b87.slice/crio-fcd65c20afbc350c9d61b1093485245bbb865573428868ea44d2b6e0456a72d7 WatchSource:0}: Error finding container fcd65c20afbc350c9d61b1093485245bbb865573428868ea44d2b6e0456a72d7: Status 404 returned error can't find the container with id fcd65c20afbc350c9d61b1093485245bbb865573428868ea44d2b6e0456a72d7 Feb 18 14:23:52 crc kubenswrapper[4739]: I0218 14:23:52.780117 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:23:53 crc kubenswrapper[4739]: I0218 14:23:53.698394 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba4c65b2-a3f9-446e-9807-bb2290d04b87","Type":"ContainerStarted","Data":"7acadbcf2178ed421b528315fa4ae13bf1f80d7851ac1bb187d53db89de360f1"} Feb 18 14:23:53 crc kubenswrapper[4739]: I0218 14:23:53.698991 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba4c65b2-a3f9-446e-9807-bb2290d04b87","Type":"ContainerStarted","Data":"fcd65c20afbc350c9d61b1093485245bbb865573428868ea44d2b6e0456a72d7"} Feb 18 14:23:53 crc kubenswrapper[4739]: I0218 14:23:53.700738 4739 generic.go:334] "Generic (PLEG): container finished" podID="5f44227f-28d1-4aaf-9133-c4560b893022" containerID="c6cce8603450086875d16ae66c0fe0efdc54a90290fdaaf6cec216bd19489355" exitCode=0 Feb 18 14:23:53 crc kubenswrapper[4739]: I0218 14:23:53.700783 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ldxnr" event={"ID":"5f44227f-28d1-4aaf-9133-c4560b893022","Type":"ContainerDied","Data":"c6cce8603450086875d16ae66c0fe0efdc54a90290fdaaf6cec216bd19489355"} Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.450897 4739 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.471147 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.557016 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-dns-svc\") pod \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.557071 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-ovsdbserver-nb\") pod \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.557173 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-combined-ca-bundle\") pod \"5f44227f-28d1-4aaf-9133-c4560b893022\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.557234 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-scripts\") pod \"5f44227f-28d1-4aaf-9133-c4560b893022\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.557365 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-dns-swift-storage-0\") pod \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.557418 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg4n5\" (UniqueName: \"kubernetes.io/projected/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-kube-api-access-zg4n5\") pod \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.557611 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-ovsdbserver-sb\") pod \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.557650 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-config\") pod \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\" (UID: \"496019f4-ba1f-40a6-9cff-bf7bd8dfee51\") " Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.557724 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-config-data\") pod \"5f44227f-28d1-4aaf-9133-c4560b893022\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.557848 4739 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-mwvtx\" (UniqueName: \"kubernetes.io/projected/5f44227f-28d1-4aaf-9133-c4560b893022-kube-api-access-mwvtx\") pod \"5f44227f-28d1-4aaf-9133-c4560b893022\" (UID: \"5f44227f-28d1-4aaf-9133-c4560b893022\") " Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.566105 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-kube-api-access-zg4n5" (OuterVolumeSpecName: "kube-api-access-zg4n5") pod "496019f4-ba1f-40a6-9cff-bf7bd8dfee51" (UID: "496019f4-ba1f-40a6-9cff-bf7bd8dfee51"). InnerVolumeSpecName "kube-api-access-zg4n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.566810 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-scripts" (OuterVolumeSpecName: "scripts") pod "5f44227f-28d1-4aaf-9133-c4560b893022" (UID: "5f44227f-28d1-4aaf-9133-c4560b893022"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.568671 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f44227f-28d1-4aaf-9133-c4560b893022-kube-api-access-mwvtx" (OuterVolumeSpecName: "kube-api-access-mwvtx") pod "5f44227f-28d1-4aaf-9133-c4560b893022" (UID: "5f44227f-28d1-4aaf-9133-c4560b893022"). InnerVolumeSpecName "kube-api-access-mwvtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.619267 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f44227f-28d1-4aaf-9133-c4560b893022" (UID: "5f44227f-28d1-4aaf-9133-c4560b893022"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.635886 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-config-data" (OuterVolumeSpecName: "config-data") pod "5f44227f-28d1-4aaf-9133-c4560b893022" (UID: "5f44227f-28d1-4aaf-9133-c4560b893022"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.654939 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "496019f4-ba1f-40a6-9cff-bf7bd8dfee51" (UID: "496019f4-ba1f-40a6-9cff-bf7bd8dfee51"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.656248 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "496019f4-ba1f-40a6-9cff-bf7bd8dfee51" (UID: "496019f4-ba1f-40a6-9cff-bf7bd8dfee51"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.659041 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-config" (OuterVolumeSpecName: "config") pod "496019f4-ba1f-40a6-9cff-bf7bd8dfee51" (UID: "496019f4-ba1f-40a6-9cff-bf7bd8dfee51"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.661385 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.661416 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg4n5\" (UniqueName: \"kubernetes.io/projected/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-kube-api-access-zg4n5\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.661429 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.661438 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.661463 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.661472 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwvtx\" (UniqueName: \"kubernetes.io/projected/5f44227f-28d1-4aaf-9133-c4560b893022-kube-api-access-mwvtx\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.661480 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.661488 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f44227f-28d1-4aaf-9133-c4560b893022-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.664764 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "496019f4-ba1f-40a6-9cff-bf7bd8dfee51" (UID: "496019f4-ba1f-40a6-9cff-bf7bd8dfee51"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.682379 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "496019f4-ba1f-40a6-9cff-bf7bd8dfee51" (UID: "496019f4-ba1f-40a6-9cff-bf7bd8dfee51"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.752558 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ldxnr" event={"ID":"5f44227f-28d1-4aaf-9133-c4560b893022","Type":"ContainerDied","Data":"a38d14b23155387f49e9e35f9e4c0f5e1fafb41bc41b4bda60fd5f970734778d"} Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.752589 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ldxnr" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.752605 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a38d14b23155387f49e9e35f9e4c0f5e1fafb41bc41b4bda60fd5f970734778d" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.755989 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" event={"ID":"496019f4-ba1f-40a6-9cff-bf7bd8dfee51","Type":"ContainerDied","Data":"6ad816951b3fbde1a7196efd13d5a85b80b684bb992e88915048b9d53fd1030f"} Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.756048 4739 scope.go:117] "RemoveContainer" containerID="38483feafbc06f3f1617bba16dbce12f0da5c76ff8f6d9cf24f5ec57e0763180" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.756085 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.764324 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.764580 4739 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/496019f4-ba1f-40a6-9cff-bf7bd8dfee51-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.798324 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-qh25b"] Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.798798 4739 scope.go:117] "RemoveContainer" containerID="8b70db3067c947ac9fe93c9c738cc56e4ed6885f9ff81677596f72e6844d09b7" Feb 18 14:23:57 crc kubenswrapper[4739]: I0218 14:23:57.811118 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-qh25b"] Feb 18 14:23:58 crc kubenswrapper[4739]: I0218 14:23:58.428411 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496019f4-ba1f-40a6-9cff-bf7bd8dfee51" path="/var/lib/kubelet/pods/496019f4-ba1f-40a6-9cff-bf7bd8dfee51/volumes" Feb 18 14:23:58 crc kubenswrapper[4739]: I0218 14:23:58.584807 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:23:58 crc kubenswrapper[4739]: I0218 14:23:58.585055 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="60a63b94-9b6f-4117-bd43-e7c7986f3824" containerName="nova-scheduler-scheduler" containerID="cri-o://270c5492dac27b54d8ea38736f097fb276dfdd13b5159fe0a400f376b6d5be8f" gracePeriod=30 Feb 18 14:23:58 crc kubenswrapper[4739]: I0218 14:23:58.606198 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:23:58 crc kubenswrapper[4739]: I0218 14:23:58.606503 4739 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-api-0" podUID="69e98338-825d-4f76-833c-2e1ea807d942" containerName="nova-api-log" containerID="cri-o://149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944" gracePeriod=30 Feb 18 14:23:58 crc kubenswrapper[4739]: I0218 14:23:58.606665 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="69e98338-825d-4f76-833c-2e1ea807d942" containerName="nova-api-api" containerID="cri-o://7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459" gracePeriod=30 Feb 18 14:23:58 crc kubenswrapper[4739]: I0218 14:23:58.642648 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:23:58 crc kubenswrapper[4739]: I0218 14:23:58.770326 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba4c65b2-a3f9-446e-9807-bb2290d04b87","Type":"ContainerStarted","Data":"82f1e839ca8b116ac9b7ba250c8e511e21faf7a0f68a245046873b08506772ce"} Feb 18 14:23:58 crc kubenswrapper[4739]: I0218 14:23:58.773392 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-xg8g2" event={"ID":"1543620e-d684-4634-ba89-662f02f2b0e4","Type":"ContainerStarted","Data":"52da9b09d947fe24144c6c47d6f9580445b80136111737b82302681aad3a5631"} Feb 18 14:23:58 crc kubenswrapper[4739]: I0218 14:23:58.796352 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=7.796327445 podStartE2EDuration="7.796327445s" podCreationTimestamp="2026-02-18 14:23:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:23:58.78973199 +0000 UTC m=+1471.285452922" watchObservedRunningTime="2026-02-18 14:23:58.796327445 +0000 UTC m=+1471.292048367" Feb 18 14:23:58 crc kubenswrapper[4739]: I0218 14:23:58.823308 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-xg8g2" podStartSLOduration=3.052190496 podStartE2EDuration="9.823292061s" podCreationTimestamp="2026-02-18 14:23:49 +0000 UTC" firstStartedPulling="2026-02-18 14:23:50.439662945 +0000 UTC m=+1462.935383867" lastFinishedPulling="2026-02-18 14:23:57.21076451 +0000 UTC m=+1469.706485432" observedRunningTime="2026-02-18 14:23:58.82087297 +0000 UTC m=+1471.316593892" watchObservedRunningTime="2026-02-18 14:23:58.823292061 +0000 UTC m=+1471.319012983" Feb 18 14:23:59 crc kubenswrapper[4739]: I0218 14:23:59.373128 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:23:59 crc kubenswrapper[4739]: I0218 14:23:59.373643 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:23:59 crc kubenswrapper[4739]: I0218 14:23:59.788380 4739 generic.go:334] "Generic (PLEG): container finished" podID="60a63b94-9b6f-4117-bd43-e7c7986f3824" containerID="270c5492dac27b54d8ea38736f097fb276dfdd13b5159fe0a400f376b6d5be8f" exitCode=0 Feb 18 14:23:59 crc kubenswrapper[4739]: I0218 14:23:59.788765 4739 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"60a63b94-9b6f-4117-bd43-e7c7986f3824","Type":"ContainerDied","Data":"270c5492dac27b54d8ea38736f097fb276dfdd13b5159fe0a400f376b6d5be8f"} Feb 18 14:23:59 crc kubenswrapper[4739]: I0218 14:23:59.791328 4739 generic.go:334] "Generic (PLEG): container finished" podID="69e98338-825d-4f76-833c-2e1ea807d942" containerID="149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944" exitCode=143 Feb 18 14:23:59 crc kubenswrapper[4739]: I0218 14:23:59.791391 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69e98338-825d-4f76-833c-2e1ea807d942","Type":"ContainerDied","Data":"149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944"} Feb 18 14:23:59 crc kubenswrapper[4739]: I0218 14:23:59.791772 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ba4c65b2-a3f9-446e-9807-bb2290d04b87" containerName="nova-metadata-log" containerID="cri-o://7acadbcf2178ed421b528315fa4ae13bf1f80d7851ac1bb187d53db89de360f1" gracePeriod=30 Feb 18 14:23:59 crc kubenswrapper[4739]: I0218 14:23:59.791837 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ba4c65b2-a3f9-446e-9807-bb2290d04b87" containerName="nova-metadata-metadata" containerID="cri-o://82f1e839ca8b116ac9b7ba250c8e511e21faf7a0f68a245046873b08506772ce" gracePeriod=30 Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.000308 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.126237 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60a63b94-9b6f-4117-bd43-e7c7986f3824-config-data\") pod \"60a63b94-9b6f-4117-bd43-e7c7986f3824\" (UID: \"60a63b94-9b6f-4117-bd43-e7c7986f3824\") " Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.126463 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnmrw\" (UniqueName: \"kubernetes.io/projected/60a63b94-9b6f-4117-bd43-e7c7986f3824-kube-api-access-nnmrw\") pod \"60a63b94-9b6f-4117-bd43-e7c7986f3824\" (UID: \"60a63b94-9b6f-4117-bd43-e7c7986f3824\") " Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.126586 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60a63b94-9b6f-4117-bd43-e7c7986f3824-combined-ca-bundle\") pod \"60a63b94-9b6f-4117-bd43-e7c7986f3824\" (UID: \"60a63b94-9b6f-4117-bd43-e7c7986f3824\") " Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.131619 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60a63b94-9b6f-4117-bd43-e7c7986f3824-kube-api-access-nnmrw" (OuterVolumeSpecName: "kube-api-access-nnmrw") pod "60a63b94-9b6f-4117-bd43-e7c7986f3824" (UID: "60a63b94-9b6f-4117-bd43-e7c7986f3824"). InnerVolumeSpecName "kube-api-access-nnmrw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.159559 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a63b94-9b6f-4117-bd43-e7c7986f3824-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60a63b94-9b6f-4117-bd43-e7c7986f3824" (UID: "60a63b94-9b6f-4117-bd43-e7c7986f3824"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.160706 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a63b94-9b6f-4117-bd43-e7c7986f3824-config-data" (OuterVolumeSpecName: "config-data") pod "60a63b94-9b6f-4117-bd43-e7c7986f3824" (UID: "60a63b94-9b6f-4117-bd43-e7c7986f3824"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.229539 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60a63b94-9b6f-4117-bd43-e7c7986f3824-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.229785 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnmrw\" (UniqueName: \"kubernetes.io/projected/60a63b94-9b6f-4117-bd43-e7c7986f3824-kube-api-access-nnmrw\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.229795 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60a63b94-9b6f-4117-bd43-e7c7986f3824-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:00 crc kubenswrapper[4739]: E0218 14:24:00.369841 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba4c65b2_a3f9_446e_9807_bb2290d04b87.slice/crio-conmon-82f1e839ca8b116ac9b7ba250c8e511e21faf7a0f68a245046873b08506772ce.scope\": RecentStats: unable to find data in memory cache]" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.806026 4739 generic.go:334] "Generic (PLEG): container finished" podID="ba4c65b2-a3f9-446e-9807-bb2290d04b87" containerID="82f1e839ca8b116ac9b7ba250c8e511e21faf7a0f68a245046873b08506772ce" exitCode=0 Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.806335 4739 generic.go:334] "Generic (PLEG): container finished" podID="ba4c65b2-a3f9-446e-9807-bb2290d04b87" containerID="7acadbcf2178ed421b528315fa4ae13bf1f80d7851ac1bb187d53db89de360f1" exitCode=143 Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.806079 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba4c65b2-a3f9-446e-9807-bb2290d04b87","Type":"ContainerDied","Data":"82f1e839ca8b116ac9b7ba250c8e511e21faf7a0f68a245046873b08506772ce"} Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.806426 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba4c65b2-a3f9-446e-9807-bb2290d04b87","Type":"ContainerDied","Data":"7acadbcf2178ed421b528315fa4ae13bf1f80d7851ac1bb187d53db89de360f1"} Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.808133 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"60a63b94-9b6f-4117-bd43-e7c7986f3824","Type":"ContainerDied","Data":"1e4203ffbb72f10f3e23eeb7b58aca4644efc86b96e25b7947b3e87de9a09564"} Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.808181 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.808236 4739 scope.go:117] "RemoveContainer" containerID="270c5492dac27b54d8ea38736f097fb276dfdd13b5159fe0a400f376b6d5be8f" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.839521 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.875501 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.896850 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:24:00 crc kubenswrapper[4739]: E0218 14:24:00.897583 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="496019f4-ba1f-40a6-9cff-bf7bd8dfee51" containerName="dnsmasq-dns" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.897613 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="496019f4-ba1f-40a6-9cff-bf7bd8dfee51" containerName="dnsmasq-dns" Feb 18 14:24:00 crc kubenswrapper[4739]: E0218 14:24:00.897639 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="496019f4-ba1f-40a6-9cff-bf7bd8dfee51" containerName="init" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.897649 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="496019f4-ba1f-40a6-9cff-bf7bd8dfee51" containerName="init" Feb 18 14:24:00 crc kubenswrapper[4739]: E0218 14:24:00.897694 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f44227f-28d1-4aaf-9133-c4560b893022" containerName="nova-manage" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.897704 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f44227f-28d1-4aaf-9133-c4560b893022" containerName="nova-manage" Feb 18 14:24:00 crc kubenswrapper[4739]: E0218 14:24:00.897719 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60a63b94-9b6f-4117-bd43-e7c7986f3824" containerName="nova-scheduler-scheduler" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.897727 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="60a63b94-9b6f-4117-bd43-e7c7986f3824" containerName="nova-scheduler-scheduler" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.898077 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f44227f-28d1-4aaf-9133-c4560b893022" containerName="nova-manage" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.898108 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="60a63b94-9b6f-4117-bd43-e7c7986f3824" containerName="nova-scheduler-scheduler" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.898136 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="496019f4-ba1f-40a6-9cff-bf7bd8dfee51" containerName="dnsmasq-dns" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.899326 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.907756 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 14:24:00 crc kubenswrapper[4739]: I0218 14:24:00.915882 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.002207 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.055017 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c9cba7f-9b49-4413-a546-9ecf1950d543-config-data\") pod \"nova-scheduler-0\" (UID: \"2c9cba7f-9b49-4413-a546-9ecf1950d543\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.055109 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhdw6\" (UniqueName: \"kubernetes.io/projected/2c9cba7f-9b49-4413-a546-9ecf1950d543-kube-api-access-dhdw6\") pod \"nova-scheduler-0\" (UID: \"2c9cba7f-9b49-4413-a546-9ecf1950d543\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.055214 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c9cba7f-9b49-4413-a546-9ecf1950d543-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2c9cba7f-9b49-4413-a546-9ecf1950d543\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.157437 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-combined-ca-bundle\") pod \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.157576 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-nova-metadata-tls-certs\") pod \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.157610 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvtvs\" (UniqueName: \"kubernetes.io/projected/ba4c65b2-a3f9-446e-9807-bb2290d04b87-kube-api-access-xvtvs\") pod \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.157682 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba4c65b2-a3f9-446e-9807-bb2290d04b87-logs\") pod \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.157879 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-config-data\") pod \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\" (UID: \"ba4c65b2-a3f9-446e-9807-bb2290d04b87\") " Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 
14:24:01.158407 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c9cba7f-9b49-4413-a546-9ecf1950d543-config-data\") pod \"nova-scheduler-0\" (UID: \"2c9cba7f-9b49-4413-a546-9ecf1950d543\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.158429 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba4c65b2-a3f9-446e-9807-bb2290d04b87-logs" (OuterVolumeSpecName: "logs") pod "ba4c65b2-a3f9-446e-9807-bb2290d04b87" (UID: "ba4c65b2-a3f9-446e-9807-bb2290d04b87"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.158514 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhdw6\" (UniqueName: \"kubernetes.io/projected/2c9cba7f-9b49-4413-a546-9ecf1950d543-kube-api-access-dhdw6\") pod \"nova-scheduler-0\" (UID: \"2c9cba7f-9b49-4413-a546-9ecf1950d543\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.158614 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c9cba7f-9b49-4413-a546-9ecf1950d543-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2c9cba7f-9b49-4413-a546-9ecf1950d543\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.158827 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba4c65b2-a3f9-446e-9807-bb2290d04b87-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.168861 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c9cba7f-9b49-4413-a546-9ecf1950d543-config-data\") pod \"nova-scheduler-0\" (UID: \"2c9cba7f-9b49-4413-a546-9ecf1950d543\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.169140 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba4c65b2-a3f9-446e-9807-bb2290d04b87-kube-api-access-xvtvs" (OuterVolumeSpecName: "kube-api-access-xvtvs") pod "ba4c65b2-a3f9-446e-9807-bb2290d04b87" (UID: "ba4c65b2-a3f9-446e-9807-bb2290d04b87"). InnerVolumeSpecName "kube-api-access-xvtvs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.182142 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c9cba7f-9b49-4413-a546-9ecf1950d543-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2c9cba7f-9b49-4413-a546-9ecf1950d543\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.202691 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhdw6\" (UniqueName: \"kubernetes.io/projected/2c9cba7f-9b49-4413-a546-9ecf1950d543-kube-api-access-dhdw6\") pod \"nova-scheduler-0\" (UID: \"2c9cba7f-9b49-4413-a546-9ecf1950d543\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.264722 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvtvs\" (UniqueName: \"kubernetes.io/projected/ba4c65b2-a3f9-446e-9807-bb2290d04b87-kube-api-access-xvtvs\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.266609 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-config-data" (OuterVolumeSpecName: "config-data") pod "ba4c65b2-a3f9-446e-9807-bb2290d04b87" (UID: "ba4c65b2-a3f9-446e-9807-bb2290d04b87"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.318240 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.342249 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-688b9f5b49-qh25b" podUID="496019f4-ba1f-40a6-9cff-bf7bd8dfee51" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.214:5353: i/o timeout" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.360605 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ba4c65b2-a3f9-446e-9807-bb2290d04b87" (UID: "ba4c65b2-a3f9-446e-9807-bb2290d04b87"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.375734 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.375765 4739 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.381846 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba4c65b2-a3f9-446e-9807-bb2290d04b87" (UID: "ba4c65b2-a3f9-446e-9807-bb2290d04b87"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.478802 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4c65b2-a3f9-446e-9807-bb2290d04b87-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.832570 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba4c65b2-a3f9-446e-9807-bb2290d04b87","Type":"ContainerDied","Data":"fcd65c20afbc350c9d61b1093485245bbb865573428868ea44d2b6e0456a72d7"} Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.832638 4739 scope.go:117] "RemoveContainer" containerID="82f1e839ca8b116ac9b7ba250c8e511e21faf7a0f68a245046873b08506772ce" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.832634 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.842908 4739 generic.go:334] "Generic (PLEG): container finished" podID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerID="0ed9ea0acaa9a000246ad43383e3ff8712eb08ccc211dd774ede3a75ac80e158" exitCode=0 Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.842945 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg5zz" event={"ID":"0bbaed51-382b-4b1b-8b3f-95521f415472","Type":"ContainerDied","Data":"0ed9ea0acaa9a000246ad43383e3ff8712eb08ccc211dd774ede3a75ac80e158"} Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.907715 4739 scope.go:117] "RemoveContainer" containerID="7acadbcf2178ed421b528315fa4ae13bf1f80d7851ac1bb187d53db89de360f1" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.942880 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.964538 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.978509 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.991119 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:24:01 crc kubenswrapper[4739]: E0218 14:24:01.991800 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba4c65b2-a3f9-446e-9807-bb2290d04b87" containerName="nova-metadata-metadata" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.991826 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba4c65b2-a3f9-446e-9807-bb2290d04b87" containerName="nova-metadata-metadata" Feb 18 14:24:01 crc kubenswrapper[4739]: E0218 14:24:01.991889 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba4c65b2-a3f9-446e-9807-bb2290d04b87" containerName="nova-metadata-log" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.991898 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba4c65b2-a3f9-446e-9807-bb2290d04b87" containerName="nova-metadata-log" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.992234 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba4c65b2-a3f9-446e-9807-bb2290d04b87" containerName="nova-metadata-metadata" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.992265 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba4c65b2-a3f9-446e-9807-bb2290d04b87" 
containerName="nova-metadata-log" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.994277 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.996272 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 14:24:01 crc kubenswrapper[4739]: I0218 14:24:01.996271 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.017896 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.095956 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.096067 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxkct\" (UniqueName: \"kubernetes.io/projected/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-kube-api-access-kxkct\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.096089 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-logs\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.096128 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.096432 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-config-data\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.198675 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-config-data\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.198846 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.198950 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxkct\" (UniqueName: 
\"kubernetes.io/projected/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-kube-api-access-kxkct\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.198984 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-logs\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.199037 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.199640 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-logs\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.202896 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.203532 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.203547 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-config-data\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.221745 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxkct\" (UniqueName: \"kubernetes.io/projected/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-kube-api-access-kxkct\") pod \"nova-metadata-0\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " pod="openstack/nova-metadata-0" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.428687 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60a63b94-9b6f-4117-bd43-e7c7986f3824" path="/var/lib/kubelet/pods/60a63b94-9b6f-4117-bd43-e7c7986f3824/volumes" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.429755 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba4c65b2-a3f9-446e-9807-bb2290d04b87" path="/var/lib/kubelet/pods/ba4c65b2-a3f9-446e-9807-bb2290d04b87/volumes" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.515864 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.818246 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.873493 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2c9cba7f-9b49-4413-a546-9ecf1950d543","Type":"ContainerStarted","Data":"8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7"} Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.873537 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2c9cba7f-9b49-4413-a546-9ecf1950d543","Type":"ContainerStarted","Data":"55bf56fc29bc6c5c7c73f1b370236bcbca1545fe9a2d06fed65e1f34bd49bd9b"} Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.875602 4739 generic.go:334] "Generic (PLEG): container finished" podID="1543620e-d684-4634-ba89-662f02f2b0e4" containerID="52da9b09d947fe24144c6c47d6f9580445b80136111737b82302681aad3a5631" exitCode=0 Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.875675 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-xg8g2" event={"ID":"1543620e-d684-4634-ba89-662f02f2b0e4","Type":"ContainerDied","Data":"52da9b09d947fe24144c6c47d6f9580445b80136111737b82302681aad3a5631"} Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.878276 4739 generic.go:334] "Generic (PLEG): container finished" podID="69e98338-825d-4f76-833c-2e1ea807d942" containerID="7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459" exitCode=0 Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.878379 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.878367 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69e98338-825d-4f76-833c-2e1ea807d942","Type":"ContainerDied","Data":"7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459"} Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.878476 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69e98338-825d-4f76-833c-2e1ea807d942","Type":"ContainerDied","Data":"64030588e2930d3d06f331c679500514142af233ae50cdca79cac3e5508cd8e1"} Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.878493 4739 scope.go:117] "RemoveContainer" containerID="7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.888305 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg5zz" event={"ID":"0bbaed51-382b-4b1b-8b3f-95521f415472","Type":"ContainerStarted","Data":"efd61b74e3eaf8a43ba51f508d08a1af562b43d4efba62cb59c8fb5bbe916eec"} Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.891872 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.8918542350000003 podStartE2EDuration="2.891854235s" podCreationTimestamp="2026-02-18 14:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:24:02.886994803 +0000 UTC m=+1475.382715725" watchObservedRunningTime="2026-02-18 14:24:02.891854235 +0000 UTC m=+1475.387575157" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.920419 4739 scope.go:117] "RemoveContainer" containerID="149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.921242 4739 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69e98338-825d-4f76-833c-2e1ea807d942-config-data\") pod \"69e98338-825d-4f76-833c-2e1ea807d942\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.921322 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69e98338-825d-4f76-833c-2e1ea807d942-logs\") pod \"69e98338-825d-4f76-833c-2e1ea807d942\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.921369 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69e98338-825d-4f76-833c-2e1ea807d942-combined-ca-bundle\") pod \"69e98338-825d-4f76-833c-2e1ea807d942\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.921575 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg8gg\" (UniqueName: \"kubernetes.io/projected/69e98338-825d-4f76-833c-2e1ea807d942-kube-api-access-rg8gg\") pod \"69e98338-825d-4f76-833c-2e1ea807d942\" (UID: \"69e98338-825d-4f76-833c-2e1ea807d942\") " Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.922753 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69e98338-825d-4f76-833c-2e1ea807d942-logs" (OuterVolumeSpecName: "logs") pod "69e98338-825d-4f76-833c-2e1ea807d942" (UID: "69e98338-825d-4f76-833c-2e1ea807d942"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.922911 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69e98338-825d-4f76-833c-2e1ea807d942-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.926955 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69e98338-825d-4f76-833c-2e1ea807d942-kube-api-access-rg8gg" (OuterVolumeSpecName: "kube-api-access-rg8gg") pod "69e98338-825d-4f76-833c-2e1ea807d942" (UID: "69e98338-825d-4f76-833c-2e1ea807d942"). InnerVolumeSpecName "kube-api-access-rg8gg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.949667 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wg5zz" podStartSLOduration=4.420407398 podStartE2EDuration="19.949645416s" podCreationTimestamp="2026-02-18 14:23:43 +0000 UTC" firstStartedPulling="2026-02-18 14:23:46.828064035 +0000 UTC m=+1459.323784947" lastFinishedPulling="2026-02-18 14:24:02.357302043 +0000 UTC m=+1474.853022965" observedRunningTime="2026-02-18 14:24:02.934778863 +0000 UTC m=+1475.430499795" watchObservedRunningTime="2026-02-18 14:24:02.949645416 +0000 UTC m=+1475.445366338" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.977645 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69e98338-825d-4f76-833c-2e1ea807d942-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69e98338-825d-4f76-833c-2e1ea807d942" (UID: "69e98338-825d-4f76-833c-2e1ea807d942"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:02 crc kubenswrapper[4739]: I0218 14:24:02.991982 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69e98338-825d-4f76-833c-2e1ea807d942-config-data" (OuterVolumeSpecName: "config-data") pod "69e98338-825d-4f76-833c-2e1ea807d942" (UID: "69e98338-825d-4f76-833c-2e1ea807d942"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.001305 4739 scope.go:117] "RemoveContainer" containerID="7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459" Feb 18 14:24:03 crc kubenswrapper[4739]: E0218 14:24:03.001918 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459\": container with ID starting with 7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459 not found: ID does not exist" containerID="7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.001975 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459"} err="failed to get container status \"7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459\": rpc error: code = NotFound desc = could not find container \"7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459\": container with ID starting with 7c4773ea3d5d5d060e341578066491ddcfb5aedd0863b9224978cbb359604459 not found: ID does not exist" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.001997 4739 scope.go:117] "RemoveContainer" containerID="149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944" Feb 18 14:24:03 crc kubenswrapper[4739]: E0218 14:24:03.002232 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944\": container with ID starting with 149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944 not found: ID does not exist" containerID="149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.002254 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944"} err="failed to get container status \"149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944\": rpc error: code = NotFound desc = could not find container \"149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944\": container with ID starting with 149f1dd0ebc6db5dacc34452a7a9b969e10ad2dfea873518b9f7dd7584aab944 not found: ID does not exist" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.025487 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69e98338-825d-4f76-833c-2e1ea807d942-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.025517 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69e98338-825d-4f76-833c-2e1ea807d942-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.025529 4739 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg8gg\" (UniqueName: \"kubernetes.io/projected/69e98338-825d-4f76-833c-2e1ea807d942-kube-api-access-rg8gg\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:03 crc kubenswrapper[4739]: W0218 14:24:03.039772 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eb3f59c_d6e1_4eb7_ad1d_75644646a2f9.slice/crio-bfd6dae4fb10d51320c5b40851cb77928f9eb337a4774f99be8d60a2033f0bdc WatchSource:0}: Error finding container bfd6dae4fb10d51320c5b40851cb77928f9eb337a4774f99be8d60a2033f0bdc: Status 404 returned error can't find the container with id bfd6dae4fb10d51320c5b40851cb77928f9eb337a4774f99be8d60a2033f0bdc Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.044299 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.241599 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.261920 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.293826 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 14:24:03 crc kubenswrapper[4739]: E0218 14:24:03.294543 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e98338-825d-4f76-833c-2e1ea807d942" containerName="nova-api-log" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.294566 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e98338-825d-4f76-833c-2e1ea807d942" containerName="nova-api-log" Feb 18 14:24:03 crc kubenswrapper[4739]: E0218 14:24:03.294616 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e98338-825d-4f76-833c-2e1ea807d942" containerName="nova-api-api" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.294627 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e98338-825d-4f76-833c-2e1ea807d942" containerName="nova-api-api" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.294991 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e98338-825d-4f76-833c-2e1ea807d942" containerName="nova-api-log" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.295016 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e98338-825d-4f76-833c-2e1ea807d942" containerName="nova-api-api" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.297318 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.302842 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.309411 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.363461 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1abac962-efca-4430-8a58-ab62a802c442-logs\") pod \"nova-api-0\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " pod="openstack/nova-api-0" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.363623 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqqqf\" (UniqueName: \"kubernetes.io/projected/1abac962-efca-4430-8a58-ab62a802c442-kube-api-access-wqqqf\") pod \"nova-api-0\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " pod="openstack/nova-api-0" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.363674 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1abac962-efca-4430-8a58-ab62a802c442-config-data\") pod \"nova-api-0\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " pod="openstack/nova-api-0" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.363689 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abac962-efca-4430-8a58-ab62a802c442-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " pod="openstack/nova-api-0" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.465308 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1abac962-efca-4430-8a58-ab62a802c442-logs\") pod \"nova-api-0\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " pod="openstack/nova-api-0" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.465484 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqqqf\" (UniqueName: \"kubernetes.io/projected/1abac962-efca-4430-8a58-ab62a802c442-kube-api-access-wqqqf\") pod \"nova-api-0\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " pod="openstack/nova-api-0" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.465538 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1abac962-efca-4430-8a58-ab62a802c442-config-data\") pod \"nova-api-0\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " pod="openstack/nova-api-0" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.465557 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abac962-efca-4430-8a58-ab62a802c442-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " pod="openstack/nova-api-0" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.465835 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1abac962-efca-4430-8a58-ab62a802c442-logs\") pod \"nova-api-0\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " 
pod="openstack/nova-api-0" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.469243 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1abac962-efca-4430-8a58-ab62a802c442-config-data\") pod \"nova-api-0\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " pod="openstack/nova-api-0" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.470661 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abac962-efca-4430-8a58-ab62a802c442-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " pod="openstack/nova-api-0" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.483125 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqqqf\" (UniqueName: \"kubernetes.io/projected/1abac962-efca-4430-8a58-ab62a802c442-kube-api-access-wqqqf\") pod \"nova-api-0\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " pod="openstack/nova-api-0" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.626774 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.919166 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9","Type":"ContainerStarted","Data":"82597e5883ccf1e7783fac27d49ed242689bb7c4947b55ae4f7dbaeea0b394fe"} Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.919543 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9","Type":"ContainerStarted","Data":"9b767ad311330c4e783eb9ba94b73f05cfa35a7e1442008a10e0fcd720bff176"} Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.919560 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9","Type":"ContainerStarted","Data":"bfd6dae4fb10d51320c5b40851cb77928f9eb337a4774f99be8d60a2033f0bdc"} Feb 18 14:24:03 crc kubenswrapper[4739]: I0218 14:24:03.959728 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.95970381 podStartE2EDuration="2.95970381s" podCreationTimestamp="2026-02-18 14:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:24:03.945068702 +0000 UTC m=+1476.440789644" watchObservedRunningTime="2026-02-18 14:24:03.95970381 +0000 UTC m=+1476.455424732" Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.143612 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.382613 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-xg8g2" Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.433094 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69e98338-825d-4f76-833c-2e1ea807d942" path="/var/lib/kubelet/pods/69e98338-825d-4f76-833c-2e1ea807d942/volumes" Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.447183 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.447222 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.508288 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnk6l\" (UniqueName: \"kubernetes.io/projected/1543620e-d684-4634-ba89-662f02f2b0e4-kube-api-access-hnk6l\") pod \"1543620e-d684-4634-ba89-662f02f2b0e4\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.508432 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-combined-ca-bundle\") pod \"1543620e-d684-4634-ba89-662f02f2b0e4\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.508564 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-config-data\") pod \"1543620e-d684-4634-ba89-662f02f2b0e4\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.508583 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-scripts\") pod \"1543620e-d684-4634-ba89-662f02f2b0e4\" (UID: \"1543620e-d684-4634-ba89-662f02f2b0e4\") " Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.525800 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-scripts" (OuterVolumeSpecName: "scripts") pod "1543620e-d684-4634-ba89-662f02f2b0e4" (UID: "1543620e-d684-4634-ba89-662f02f2b0e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.525841 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1543620e-d684-4634-ba89-662f02f2b0e4-kube-api-access-hnk6l" (OuterVolumeSpecName: "kube-api-access-hnk6l") pod "1543620e-d684-4634-ba89-662f02f2b0e4" (UID: "1543620e-d684-4634-ba89-662f02f2b0e4"). InnerVolumeSpecName "kube-api-access-hnk6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.549548 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1543620e-d684-4634-ba89-662f02f2b0e4" (UID: "1543620e-d684-4634-ba89-662f02f2b0e4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.557608 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-config-data" (OuterVolumeSpecName: "config-data") pod "1543620e-d684-4634-ba89-662f02f2b0e4" (UID: "1543620e-d684-4634-ba89-662f02f2b0e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.611382 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnk6l\" (UniqueName: \"kubernetes.io/projected/1543620e-d684-4634-ba89-662f02f2b0e4-kube-api-access-hnk6l\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.611430 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.611465 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.611480 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1543620e-d684-4634-ba89-662f02f2b0e4-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.946974 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-xg8g2" event={"ID":"1543620e-d684-4634-ba89-662f02f2b0e4","Type":"ContainerDied","Data":"36831a1e37f2b21d3c3aead0d2ccb7ab0dbd8dd55f9fcd39a7a0f41c0dec9ba6"} Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.947290 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36831a1e37f2b21d3c3aead0d2ccb7ab0dbd8dd55f9fcd39a7a0f41c0dec9ba6" Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.947508 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-xg8g2" Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.951776 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1abac962-efca-4430-8a58-ab62a802c442","Type":"ContainerStarted","Data":"8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed"} Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.951813 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1abac962-efca-4430-8a58-ab62a802c442","Type":"ContainerStarted","Data":"c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730"} Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.951822 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1abac962-efca-4430-8a58-ab62a802c442","Type":"ContainerStarted","Data":"f9a2e2a20257041f47da0dff019617b0952ac1e5137c62cf8adc4e7b636524d9"} Feb 18 14:24:04 crc kubenswrapper[4739]: I0218 14:24:04.988038 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.9880145219999998 podStartE2EDuration="1.988014522s" podCreationTimestamp="2026-02-18 14:24:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:24:04.971138718 +0000 UTC m=+1477.466859650" watchObservedRunningTime="2026-02-18 14:24:04.988014522 +0000 UTC m=+1477.483735444" Feb 18 14:24:05 crc kubenswrapper[4739]: I0218 14:24:05.168960 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 14:24:05 crc kubenswrapper[4739]: I0218 14:24:05.571969 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wg5zz" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="registry-server" probeResult="failure" output=< Feb 18 14:24:05 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:24:05 crc kubenswrapper[4739]: > Feb 18 14:24:06 crc kubenswrapper[4739]: I0218 14:24:06.320791 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 14:24:07 crc kubenswrapper[4739]: I0218 14:24:07.516247 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 14:24:07 crc kubenswrapper[4739]: I0218 14:24:07.516679 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.557082 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.557695 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="1d9742cc-1407-4631-a6ba-55fe1cc3fe4d" containerName="kube-state-metrics" containerID="cri-o://854525aaeba0262ed326c20d6a5adb12a6f5a5f831c0eda717220f2304b4bf4f" gracePeriod=30 Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.570164 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 18 14:24:09 crc kubenswrapper[4739]: E0218 14:24:09.570689 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1543620e-d684-4634-ba89-662f02f2b0e4" containerName="aodh-db-sync" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.570709 4739 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1543620e-d684-4634-ba89-662f02f2b0e4" containerName="aodh-db-sync" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.570964 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1543620e-d684-4634-ba89-662f02f2b0e4" containerName="aodh-db-sync" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.574154 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.576783 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-747v8" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.576937 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.577336 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.582379 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.674701 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-combined-ca-bundle\") pod \"aodh-0\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.674913 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmddt\" (UniqueName: \"kubernetes.io/projected/42803b7f-4360-4d79-94e6-ab17944142ab-kube-api-access-hmddt\") pod \"aodh-0\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.674992 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-scripts\") pod \"aodh-0\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.675047 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-config-data\") pod \"aodh-0\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.726718 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.727353 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="4786d26d-b01e-4e3a-9407-81307b5a1433" containerName="mysqld-exporter" containerID="cri-o://9182016155c2cfd3865f3579fd6250303c57c41f06d79e483e00d365f229195e" gracePeriod=30 Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.777757 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-combined-ca-bundle\") pod \"aodh-0\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.777968 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmddt\" 
(UniqueName: \"kubernetes.io/projected/42803b7f-4360-4d79-94e6-ab17944142ab-kube-api-access-hmddt\") pod \"aodh-0\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.778036 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-scripts\") pod \"aodh-0\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.778083 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-config-data\") pod \"aodh-0\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.784694 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-config-data\") pod \"aodh-0\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.785816 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-scripts\") pod \"aodh-0\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.794830 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-combined-ca-bundle\") pod \"aodh-0\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.795754 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmddt\" (UniqueName: \"kubernetes.io/projected/42803b7f-4360-4d79-94e6-ab17944142ab-kube-api-access-hmddt\") pod \"aodh-0\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " pod="openstack/aodh-0" Feb 18 14:24:09 crc kubenswrapper[4739]: I0218 14:24:09.899842 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 18 14:24:10 crc kubenswrapper[4739]: I0218 14:24:10.011890 4739 generic.go:334] "Generic (PLEG): container finished" podID="1d9742cc-1407-4631-a6ba-55fe1cc3fe4d" containerID="854525aaeba0262ed326c20d6a5adb12a6f5a5f831c0eda717220f2304b4bf4f" exitCode=2 Feb 18 14:24:10 crc kubenswrapper[4739]: I0218 14:24:10.011948 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1d9742cc-1407-4631-a6ba-55fe1cc3fe4d","Type":"ContainerDied","Data":"854525aaeba0262ed326c20d6a5adb12a6f5a5f831c0eda717220f2304b4bf4f"} Feb 18 14:24:10 crc kubenswrapper[4739]: I0218 14:24:10.013263 4739 generic.go:334] "Generic (PLEG): container finished" podID="4786d26d-b01e-4e3a-9407-81307b5a1433" containerID="9182016155c2cfd3865f3579fd6250303c57c41f06d79e483e00d365f229195e" exitCode=2 Feb 18 14:24:10 crc kubenswrapper[4739]: I0218 14:24:10.013281 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"4786d26d-b01e-4e3a-9407-81307b5a1433","Type":"ContainerDied","Data":"9182016155c2cfd3865f3579fd6250303c57c41f06d79e483e00d365f229195e"} Feb 18 14:24:10 crc kubenswrapper[4739]: I0218 14:24:10.484622 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 14:24:10 crc kubenswrapper[4739]: I0218 14:24:10.510940 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndzf6\" (UniqueName: \"kubernetes.io/projected/1d9742cc-1407-4631-a6ba-55fe1cc3fe4d-kube-api-access-ndzf6\") pod \"1d9742cc-1407-4631-a6ba-55fe1cc3fe4d\" (UID: \"1d9742cc-1407-4631-a6ba-55fe1cc3fe4d\") " Feb 18 14:24:10 crc kubenswrapper[4739]: I0218 14:24:10.590692 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d9742cc-1407-4631-a6ba-55fe1cc3fe4d-kube-api-access-ndzf6" (OuterVolumeSpecName: "kube-api-access-ndzf6") pod "1d9742cc-1407-4631-a6ba-55fe1cc3fe4d" (UID: "1d9742cc-1407-4631-a6ba-55fe1cc3fe4d"). InnerVolumeSpecName "kube-api-access-ndzf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:10 crc kubenswrapper[4739]: I0218 14:24:10.638339 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndzf6\" (UniqueName: \"kubernetes.io/projected/1d9742cc-1407-4631-a6ba-55fe1cc3fe4d-kube-api-access-ndzf6\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:10 crc kubenswrapper[4739]: I0218 14:24:10.929600 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 18 14:24:10 crc kubenswrapper[4739]: I0218 14:24:10.941601 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.048026 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4786d26d-b01e-4e3a-9407-81307b5a1433-combined-ca-bundle\") pod \"4786d26d-b01e-4e3a-9407-81307b5a1433\" (UID: \"4786d26d-b01e-4e3a-9407-81307b5a1433\") " Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.048405 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4786d26d-b01e-4e3a-9407-81307b5a1433-config-data\") pod \"4786d26d-b01e-4e3a-9407-81307b5a1433\" (UID: \"4786d26d-b01e-4e3a-9407-81307b5a1433\") " Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.048452 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xn7l\" (UniqueName: \"kubernetes.io/projected/4786d26d-b01e-4e3a-9407-81307b5a1433-kube-api-access-2xn7l\") pod \"4786d26d-b01e-4e3a-9407-81307b5a1433\" (UID: \"4786d26d-b01e-4e3a-9407-81307b5a1433\") " Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.060774 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1d9742cc-1407-4631-a6ba-55fe1cc3fe4d","Type":"ContainerDied","Data":"2bc5886939c37fb1062674e7d0eff4b81f7f7a7b2294e0f4745de8bbbca3ba11"} Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.060837 4739 scope.go:117] "RemoveContainer" containerID="854525aaeba0262ed326c20d6a5adb12a6f5a5f831c0eda717220f2304b4bf4f" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.061018 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.069814 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4786d26d-b01e-4e3a-9407-81307b5a1433-kube-api-access-2xn7l" (OuterVolumeSpecName: "kube-api-access-2xn7l") pod "4786d26d-b01e-4e3a-9407-81307b5a1433" (UID: "4786d26d-b01e-4e3a-9407-81307b5a1433"). InnerVolumeSpecName "kube-api-access-2xn7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.076807 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"4786d26d-b01e-4e3a-9407-81307b5a1433","Type":"ContainerDied","Data":"7802eb786f9fd65a5a871491a73453af4c3e9308ab2608296cd37aed4159f91a"} Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.077121 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.090560 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"42803b7f-4360-4d79-94e6-ab17944142ab","Type":"ContainerStarted","Data":"8762dd17c92d0766d85297d3b8ff657afb0c476107270f6df46caae48fe9cee4"} Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.096870 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4786d26d-b01e-4e3a-9407-81307b5a1433-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4786d26d-b01e-4e3a-9407-81307b5a1433" (UID: "4786d26d-b01e-4e3a-9407-81307b5a1433"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.151815 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.156339 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4786d26d-b01e-4e3a-9407-81307b5a1433-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.156370 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xn7l\" (UniqueName: \"kubernetes.io/projected/4786d26d-b01e-4e3a-9407-81307b5a1433-kube-api-access-2xn7l\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.157071 4739 scope.go:117] "RemoveContainer" containerID="9182016155c2cfd3865f3579fd6250303c57c41f06d79e483e00d365f229195e" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.169839 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.195631 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 14:24:11 crc kubenswrapper[4739]: E0218 14:24:11.196379 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d9742cc-1407-4631-a6ba-55fe1cc3fe4d" containerName="kube-state-metrics" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.196398 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d9742cc-1407-4631-a6ba-55fe1cc3fe4d" containerName="kube-state-metrics" Feb 18 14:24:11 crc kubenswrapper[4739]: E0218 14:24:11.196428 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4786d26d-b01e-4e3a-9407-81307b5a1433" containerName="mysqld-exporter" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.196435 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4786d26d-b01e-4e3a-9407-81307b5a1433" containerName="mysqld-exporter" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.196708 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d9742cc-1407-4631-a6ba-55fe1cc3fe4d" containerName="kube-state-metrics" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.196722 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4786d26d-b01e-4e3a-9407-81307b5a1433" containerName="mysqld-exporter" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.197648 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.202838 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.202867 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.320081 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.361870 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3e688eb1-895d-465e-b5d9-a7b7ba9f4650-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3e688eb1-895d-465e-b5d9-a7b7ba9f4650\") " pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.362299 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e688eb1-895d-465e-b5d9-a7b7ba9f4650-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3e688eb1-895d-465e-b5d9-a7b7ba9f4650\") " pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.362347 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e688eb1-895d-465e-b5d9-a7b7ba9f4650-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3e688eb1-895d-465e-b5d9-a7b7ba9f4650\") " pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.362385 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4wn8\" (UniqueName: \"kubernetes.io/projected/3e688eb1-895d-465e-b5d9-a7b7ba9f4650-kube-api-access-m4wn8\") pod \"kube-state-metrics-0\" (UID: \"3e688eb1-895d-465e-b5d9-a7b7ba9f4650\") " pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.375646 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.463935 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e688eb1-895d-465e-b5d9-a7b7ba9f4650-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3e688eb1-895d-465e-b5d9-a7b7ba9f4650\") " pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.463993 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4wn8\" (UniqueName: \"kubernetes.io/projected/3e688eb1-895d-465e-b5d9-a7b7ba9f4650-kube-api-access-m4wn8\") pod \"kube-state-metrics-0\" (UID: \"3e688eb1-895d-465e-b5d9-a7b7ba9f4650\") " pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.464089 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3e688eb1-895d-465e-b5d9-a7b7ba9f4650-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3e688eb1-895d-465e-b5d9-a7b7ba9f4650\") " pod="openstack/kube-state-metrics-0" 
Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.464269 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e688eb1-895d-465e-b5d9-a7b7ba9f4650-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3e688eb1-895d-465e-b5d9-a7b7ba9f4650\") " pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.467820 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3e688eb1-895d-465e-b5d9-a7b7ba9f4650-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3e688eb1-895d-465e-b5d9-a7b7ba9f4650\") " pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.468030 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e688eb1-895d-465e-b5d9-a7b7ba9f4650-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3e688eb1-895d-465e-b5d9-a7b7ba9f4650\") " pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.469657 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e688eb1-895d-465e-b5d9-a7b7ba9f4650-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3e688eb1-895d-465e-b5d9-a7b7ba9f4650\") " pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.472920 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.489927 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4786d26d-b01e-4e3a-9407-81307b5a1433-config-data" (OuterVolumeSpecName: "config-data") pod "4786d26d-b01e-4e3a-9407-81307b5a1433" (UID: "4786d26d-b01e-4e3a-9407-81307b5a1433"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.529500 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4wn8\" (UniqueName: \"kubernetes.io/projected/3e688eb1-895d-465e-b5d9-a7b7ba9f4650-kube-api-access-m4wn8\") pod \"kube-state-metrics-0\" (UID: \"3e688eb1-895d-465e-b5d9-a7b7ba9f4650\") " pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.567206 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4786d26d-b01e-4e3a-9407-81307b5a1433-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.795862 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.810853 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.814605 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.832009 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.833776 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.836237 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.836497 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.875825 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8143c3df-5224-4095-a65f-f9f005913b61-config-data\") pod \"mysqld-exporter-0\" (UID: \"8143c3df-5224-4095-a65f-f9f005913b61\") " pod="openstack/mysqld-exporter-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.875876 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/8143c3df-5224-4095-a65f-f9f005913b61-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"8143c3df-5224-4095-a65f-f9f005913b61\") " pod="openstack/mysqld-exporter-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.875942 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8143c3df-5224-4095-a65f-f9f005913b61-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"8143c3df-5224-4095-a65f-f9f005913b61\") " pod="openstack/mysqld-exporter-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.875973 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t2cc\" (UniqueName: \"kubernetes.io/projected/8143c3df-5224-4095-a65f-f9f005913b61-kube-api-access-5t2cc\") pod \"mysqld-exporter-0\" (UID: \"8143c3df-5224-4095-a65f-f9f005913b61\") " pod="openstack/mysqld-exporter-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.886622 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.978648 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8143c3df-5224-4095-a65f-f9f005913b61-config-data\") pod \"mysqld-exporter-0\" (UID: \"8143c3df-5224-4095-a65f-f9f005913b61\") " pod="openstack/mysqld-exporter-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.978758 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/8143c3df-5224-4095-a65f-f9f005913b61-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"8143c3df-5224-4095-a65f-f9f005913b61\") " pod="openstack/mysqld-exporter-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.978852 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8143c3df-5224-4095-a65f-f9f005913b61-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"8143c3df-5224-4095-a65f-f9f005913b61\") " pod="openstack/mysqld-exporter-0" Feb 18 14:24:11 crc kubenswrapper[4739]: I0218 14:24:11.978886 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t2cc\" (UniqueName: \"kubernetes.io/projected/8143c3df-5224-4095-a65f-f9f005913b61-kube-api-access-5t2cc\") pod \"mysqld-exporter-0\" (UID: 
\"8143c3df-5224-4095-a65f-f9f005913b61\") " pod="openstack/mysqld-exporter-0" Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:11.995780 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8143c3df-5224-4095-a65f-f9f005913b61-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"8143c3df-5224-4095-a65f-f9f005913b61\") " pod="openstack/mysqld-exporter-0" Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.009061 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/8143c3df-5224-4095-a65f-f9f005913b61-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"8143c3df-5224-4095-a65f-f9f005913b61\") " pod="openstack/mysqld-exporter-0" Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.020021 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8143c3df-5224-4095-a65f-f9f005913b61-config-data\") pod \"mysqld-exporter-0\" (UID: \"8143c3df-5224-4095-a65f-f9f005913b61\") " pod="openstack/mysqld-exporter-0" Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.022273 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t2cc\" (UniqueName: \"kubernetes.io/projected/8143c3df-5224-4095-a65f-f9f005913b61-kube-api-access-5t2cc\") pod \"mysqld-exporter-0\" (UID: \"8143c3df-5224-4095-a65f-f9f005913b61\") " pod="openstack/mysqld-exporter-0" Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.139195 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.178368 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 14:24:12 crc kubenswrapper[4739]: W0218 14:24:12.414335 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e688eb1_895d_465e_b5d9_a7b7ba9f4650.slice/crio-2b52ae6d206dbcb05111576f31650ab21da5ccd8ddb06594e34f48141096499e WatchSource:0}: Error finding container 2b52ae6d206dbcb05111576f31650ab21da5ccd8ddb06594e34f48141096499e: Status 404 returned error can't find the container with id 2b52ae6d206dbcb05111576f31650ab21da5ccd8ddb06594e34f48141096499e Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.423337 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d9742cc-1407-4631-a6ba-55fe1cc3fe4d" path="/var/lib/kubelet/pods/1d9742cc-1407-4631-a6ba-55fe1cc3fe4d/volumes" Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.425167 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4786d26d-b01e-4e3a-9407-81307b5a1433" path="/var/lib/kubelet/pods/4786d26d-b01e-4e3a-9407-81307b5a1433/volumes" Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.426204 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.516097 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.516572 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.836373 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.837014 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="ceilometer-central-agent" containerID="cri-o://62637c0c6e3d9aa6dd9a357d05be808f306c43132357509831c6c4276f035294" gracePeriod=30 Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.837117 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="proxy-httpd" containerID="cri-o://c29f84cb2f10dd5869ffc87617c8a9e99b5f1b7ab01e8f8f6bf9c1b7fd53866f" gracePeriod=30 Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.837181 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="sg-core" containerID="cri-o://207b5c8f173777a219abe5fab0d30f956acecb4b1b39cab55be3107b97540271" gracePeriod=30 Feb 18 14:24:12 crc kubenswrapper[4739]: I0218 14:24:12.837213 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="ceilometer-notification-agent" containerID="cri-o://3b915056344632cea227fb084003510db6f28165dd95f87eeb8a41b39c07b956" gracePeriod=30 Feb 18 14:24:13 crc kubenswrapper[4739]: I0218 14:24:13.124435 4739 generic.go:334] "Generic (PLEG): container finished" podID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerID="207b5c8f173777a219abe5fab0d30f956acecb4b1b39cab55be3107b97540271" exitCode=2 Feb 18 14:24:13 crc kubenswrapper[4739]: I0218 14:24:13.124478 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea8be82-c714-4993-b2c0-7af4a7fde0d3","Type":"ContainerDied","Data":"207b5c8f173777a219abe5fab0d30f956acecb4b1b39cab55be3107b97540271"} Feb 18 14:24:13 crc kubenswrapper[4739]: I0218 14:24:13.126251 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3e688eb1-895d-465e-b5d9-a7b7ba9f4650","Type":"ContainerStarted","Data":"2b52ae6d206dbcb05111576f31650ab21da5ccd8ddb06594e34f48141096499e"} Feb 18 14:24:13 crc kubenswrapper[4739]: I0218 14:24:13.535681 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 14:24:13 crc kubenswrapper[4739]: I0218 14:24:13.536212 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 14:24:13 crc kubenswrapper[4739]: I0218 14:24:13.627646 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 14:24:13 crc kubenswrapper[4739]: I0218 14:24:13.627690 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 14:24:13 crc kubenswrapper[4739]: I0218 14:24:13.865369 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 14:24:13 crc 
kubenswrapper[4739]: I0218 14:24:13.883327 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 18 14:24:14 crc kubenswrapper[4739]: I0218 14:24:14.148082 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"42803b7f-4360-4d79-94e6-ab17944142ab","Type":"ContainerStarted","Data":"941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5"} Feb 18 14:24:14 crc kubenswrapper[4739]: I0218 14:24:14.159281 4739 generic.go:334] "Generic (PLEG): container finished" podID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerID="c29f84cb2f10dd5869ffc87617c8a9e99b5f1b7ab01e8f8f6bf9c1b7fd53866f" exitCode=0 Feb 18 14:24:14 crc kubenswrapper[4739]: I0218 14:24:14.159313 4739 generic.go:334] "Generic (PLEG): container finished" podID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerID="62637c0c6e3d9aa6dd9a357d05be808f306c43132357509831c6c4276f035294" exitCode=0 Feb 18 14:24:14 crc kubenswrapper[4739]: I0218 14:24:14.159358 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea8be82-c714-4993-b2c0-7af4a7fde0d3","Type":"ContainerDied","Data":"c29f84cb2f10dd5869ffc87617c8a9e99b5f1b7ab01e8f8f6bf9c1b7fd53866f"} Feb 18 14:24:14 crc kubenswrapper[4739]: I0218 14:24:14.159466 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea8be82-c714-4993-b2c0-7af4a7fde0d3","Type":"ContainerDied","Data":"62637c0c6e3d9aa6dd9a357d05be808f306c43132357509831c6c4276f035294"} Feb 18 14:24:14 crc kubenswrapper[4739]: I0218 14:24:14.160598 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"8143c3df-5224-4095-a65f-f9f005913b61","Type":"ContainerStarted","Data":"dcec1e90c84500a3429b635b18aca2f1bf3f48cd9c5676bce48294227e9813df"} Feb 18 14:24:14 crc kubenswrapper[4739]: I0218 14:24:14.710639 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1abac962-efca-4430-8a58-ab62a802c442" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.251:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 14:24:14 crc kubenswrapper[4739]: I0218 14:24:14.710644 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1abac962-efca-4430-8a58-ab62a802c442" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.251:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.183669 4739 generic.go:334] "Generic (PLEG): container finished" podID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerID="3b915056344632cea227fb084003510db6f28165dd95f87eeb8a41b39c07b956" exitCode=0 Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.183964 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea8be82-c714-4993-b2c0-7af4a7fde0d3","Type":"ContainerDied","Data":"3b915056344632cea227fb084003510db6f28165dd95f87eeb8a41b39c07b956"} Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.499243 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wg5zz" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="registry-server" probeResult="failure" output=< Feb 18 14:24:15 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:24:15 crc kubenswrapper[4739]: > Feb 18 14:24:15 crc kubenswrapper[4739]: 
I0218 14:24:15.517477 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.693529 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-config-data\") pod \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.693619 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-combined-ca-bundle\") pod \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.693712 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-scripts\") pod \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.693808 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-sg-core-conf-yaml\") pod \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.693881 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bz55\" (UniqueName: \"kubernetes.io/projected/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-kube-api-access-7bz55\") pod \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.693912 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-log-httpd\") pod \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.694044 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-run-httpd\") pod \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\" (UID: \"1ea8be82-c714-4993-b2c0-7af4a7fde0d3\") " Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.695283 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1ea8be82-c714-4993-b2c0-7af4a7fde0d3" (UID: "1ea8be82-c714-4993-b2c0-7af4a7fde0d3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.696342 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1ea8be82-c714-4993-b2c0-7af4a7fde0d3" (UID: "1ea8be82-c714-4993-b2c0-7af4a7fde0d3"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.700006 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-kube-api-access-7bz55" (OuterVolumeSpecName: "kube-api-access-7bz55") pod "1ea8be82-c714-4993-b2c0-7af4a7fde0d3" (UID: "1ea8be82-c714-4993-b2c0-7af4a7fde0d3"). InnerVolumeSpecName "kube-api-access-7bz55". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.701221 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-scripts" (OuterVolumeSpecName: "scripts") pod "1ea8be82-c714-4993-b2c0-7af4a7fde0d3" (UID: "1ea8be82-c714-4993-b2c0-7af4a7fde0d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.729380 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1ea8be82-c714-4993-b2c0-7af4a7fde0d3" (UID: "1ea8be82-c714-4993-b2c0-7af4a7fde0d3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.801487 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.801528 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.801540 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.801552 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bz55\" (UniqueName: \"kubernetes.io/projected/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-kube-api-access-7bz55\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.801563 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.812647 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ea8be82-c714-4993-b2c0-7af4a7fde0d3" (UID: "1ea8be82-c714-4993-b2c0-7af4a7fde0d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.845858 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-config-data" (OuterVolumeSpecName: "config-data") pod "1ea8be82-c714-4993-b2c0-7af4a7fde0d3" (UID: "1ea8be82-c714-4993-b2c0-7af4a7fde0d3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.904322 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:15 crc kubenswrapper[4739]: I0218 14:24:15.904361 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea8be82-c714-4993-b2c0-7af4a7fde0d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.197406 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1ea8be82-c714-4993-b2c0-7af4a7fde0d3","Type":"ContainerDied","Data":"633d577ca0d7c26b5d575a55a4d77d6216b341dedf226f7656b21d39f19c64e4"} Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.197917 4739 scope.go:117] "RemoveContainer" containerID="c29f84cb2f10dd5869ffc87617c8a9e99b5f1b7ab01e8f8f6bf9c1b7fd53866f" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.197421 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.200489 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3e688eb1-895d-465e-b5d9-a7b7ba9f4650","Type":"ContainerStarted","Data":"7e94e110933254a8f49a8743c9a2da7631a04ab7a4c23f8767ba001ebb44a0bd"} Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.200934 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.231930 4739 scope.go:117] "RemoveContainer" containerID="207b5c8f173777a219abe5fab0d30f956acecb4b1b39cab55be3107b97540271" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.243478 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.072622869 podStartE2EDuration="5.243435028s" podCreationTimestamp="2026-02-18 14:24:11 +0000 UTC" firstStartedPulling="2026-02-18 14:24:12.780517737 +0000 UTC m=+1485.276238659" lastFinishedPulling="2026-02-18 14:24:14.951329896 +0000 UTC m=+1487.447050818" observedRunningTime="2026-02-18 14:24:16.222694346 +0000 UTC m=+1488.718415278" watchObservedRunningTime="2026-02-18 14:24:16.243435028 +0000 UTC m=+1488.739155960" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.267552 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.273799 4739 scope.go:117] "RemoveContainer" containerID="3b915056344632cea227fb084003510db6f28165dd95f87eeb8a41b39c07b956" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.282125 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.301328 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:24:16 crc kubenswrapper[4739]: E0218 14:24:16.301935 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="ceilometer-central-agent" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.301952 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="ceilometer-central-agent" Feb 18 
14:24:16 crc kubenswrapper[4739]: E0218 14:24:16.301980 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="ceilometer-notification-agent" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.301986 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="ceilometer-notification-agent" Feb 18 14:24:16 crc kubenswrapper[4739]: E0218 14:24:16.302010 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="sg-core" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.302016 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="sg-core" Feb 18 14:24:16 crc kubenswrapper[4739]: E0218 14:24:16.302029 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="proxy-httpd" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.302035 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="proxy-httpd" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.302271 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="ceilometer-central-agent" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.302294 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="ceilometer-notification-agent" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.302306 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="sg-core" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.302315 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" containerName="proxy-httpd" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.302884 4739 scope.go:117] "RemoveContainer" containerID="62637c0c6e3d9aa6dd9a357d05be808f306c43132357509831c6c4276f035294" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.305180 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.307490 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.307690 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.307796 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.322857 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.420645 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.420695 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsfrq\" (UniqueName: \"kubernetes.io/projected/85906c1a-8b4b-4859-a6dc-08dd07710f2a-kube-api-access-xsfrq\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.420746 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.420904 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-config-data\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.421066 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85906c1a-8b4b-4859-a6dc-08dd07710f2a-run-httpd\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.421473 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-scripts\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.421805 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85906c1a-8b4b-4859-a6dc-08dd07710f2a-log-httpd\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.421846 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.426438 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ea8be82-c714-4993-b2c0-7af4a7fde0d3" path="/var/lib/kubelet/pods/1ea8be82-c714-4993-b2c0-7af4a7fde0d3/volumes" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.524364 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85906c1a-8b4b-4859-a6dc-08dd07710f2a-run-httpd\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.524605 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-scripts\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.524714 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85906c1a-8b4b-4859-a6dc-08dd07710f2a-log-httpd\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.524748 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.524828 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.524856 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsfrq\" (UniqueName: \"kubernetes.io/projected/85906c1a-8b4b-4859-a6dc-08dd07710f2a-kube-api-access-xsfrq\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.524906 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.524967 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-config-data\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.525181 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85906c1a-8b4b-4859-a6dc-08dd07710f2a-log-httpd\") pod \"ceilometer-0\" (UID: 
\"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.525593 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85906c1a-8b4b-4859-a6dc-08dd07710f2a-run-httpd\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.530313 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.530901 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-config-data\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.531964 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.532549 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-scripts\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.534145 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.564193 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsfrq\" (UniqueName: \"kubernetes.io/projected/85906c1a-8b4b-4859-a6dc-08dd07710f2a-kube-api-access-xsfrq\") pod \"ceilometer-0\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " pod="openstack/ceilometer-0" Feb 18 14:24:16 crc kubenswrapper[4739]: I0218 14:24:16.632071 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:24:17 crc kubenswrapper[4739]: I0218 14:24:17.213617 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"8143c3df-5224-4095-a65f-f9f005913b61","Type":"ContainerStarted","Data":"e607278779921b7daa7d5089f3d9fd4d4c9965b020122527d9641a5e0f0f5f29"} Feb 18 14:24:17 crc kubenswrapper[4739]: I0218 14:24:17.238718 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=4.274881284 podStartE2EDuration="6.238694097s" podCreationTimestamp="2026-02-18 14:24:11 +0000 UTC" firstStartedPulling="2026-02-18 14:24:13.871455371 +0000 UTC m=+1486.367176293" lastFinishedPulling="2026-02-18 14:24:15.835268184 +0000 UTC m=+1488.330989106" observedRunningTime="2026-02-18 14:24:17.230713646 +0000 UTC m=+1489.726434568" watchObservedRunningTime="2026-02-18 14:24:17.238694097 +0000 UTC m=+1489.734415029" Feb 18 14:24:19 crc kubenswrapper[4739]: I0218 14:24:19.247297 4739 generic.go:334] "Generic (PLEG): container finished" podID="60d51f11-fba7-4368-9665-198dca1f9adc" containerID="4b60b38fea8ccc13c08f02fa56b81b4a343cc57d4a2683a068d2eaff684ca543" exitCode=137 Feb 18 14:24:19 crc kubenswrapper[4739]: I0218 14:24:19.247611 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"60d51f11-fba7-4368-9665-198dca1f9adc","Type":"ContainerDied","Data":"4b60b38fea8ccc13c08f02fa56b81b4a343cc57d4a2683a068d2eaff684ca543"} Feb 18 14:24:20 crc kubenswrapper[4739]: I0218 14:24:20.704865 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:24:20 crc kubenswrapper[4739]: I0218 14:24:20.756595 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d51f11-fba7-4368-9665-198dca1f9adc-config-data\") pod \"60d51f11-fba7-4368-9665-198dca1f9adc\" (UID: \"60d51f11-fba7-4368-9665-198dca1f9adc\") " Feb 18 14:24:20 crc kubenswrapper[4739]: I0218 14:24:20.757212 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjqbp\" (UniqueName: \"kubernetes.io/projected/60d51f11-fba7-4368-9665-198dca1f9adc-kube-api-access-vjqbp\") pod \"60d51f11-fba7-4368-9665-198dca1f9adc\" (UID: \"60d51f11-fba7-4368-9665-198dca1f9adc\") " Feb 18 14:24:20 crc kubenswrapper[4739]: I0218 14:24:20.757263 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d51f11-fba7-4368-9665-198dca1f9adc-combined-ca-bundle\") pod \"60d51f11-fba7-4368-9665-198dca1f9adc\" (UID: \"60d51f11-fba7-4368-9665-198dca1f9adc\") " Feb 18 14:24:20 crc kubenswrapper[4739]: I0218 14:24:20.757507 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:24:20 crc kubenswrapper[4739]: I0218 14:24:20.762268 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60d51f11-fba7-4368-9665-198dca1f9adc-kube-api-access-vjqbp" (OuterVolumeSpecName: "kube-api-access-vjqbp") pod "60d51f11-fba7-4368-9665-198dca1f9adc" (UID: "60d51f11-fba7-4368-9665-198dca1f9adc"). InnerVolumeSpecName "kube-api-access-vjqbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:20 crc kubenswrapper[4739]: I0218 14:24:20.800882 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d51f11-fba7-4368-9665-198dca1f9adc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60d51f11-fba7-4368-9665-198dca1f9adc" (UID: "60d51f11-fba7-4368-9665-198dca1f9adc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:20 crc kubenswrapper[4739]: I0218 14:24:20.801520 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d51f11-fba7-4368-9665-198dca1f9adc-config-data" (OuterVolumeSpecName: "config-data") pod "60d51f11-fba7-4368-9665-198dca1f9adc" (UID: "60d51f11-fba7-4368-9665-198dca1f9adc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:20 crc kubenswrapper[4739]: I0218 14:24:20.861648 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60d51f11-fba7-4368-9665-198dca1f9adc-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:20 crc kubenswrapper[4739]: I0218 14:24:20.861689 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjqbp\" (UniqueName: \"kubernetes.io/projected/60d51f11-fba7-4368-9665-198dca1f9adc-kube-api-access-vjqbp\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:20 crc kubenswrapper[4739]: I0218 14:24:20.861704 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d51f11-fba7-4368-9665-198dca1f9adc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.271915 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"42803b7f-4360-4d79-94e6-ab17944142ab","Type":"ContainerStarted","Data":"5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3"} Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.273312 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"60d51f11-fba7-4368-9665-198dca1f9adc","Type":"ContainerDied","Data":"5e425dc81372bc58ea5a732a114720e008b75f79e58f406fbae181589aeba1b6"} Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.273367 4739 scope.go:117] "RemoveContainer" containerID="4b60b38fea8ccc13c08f02fa56b81b4a343cc57d4a2683a068d2eaff684ca543" Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.273992 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.275152 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85906c1a-8b4b-4859-a6dc-08dd07710f2a","Type":"ContainerStarted","Data":"3cb69177aa55275b8d9b6fef13b5aac13b6cdb36cddbb51be35d3b65d87e5c5e"} Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.365595 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.379364 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.394711 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 14:24:21 crc kubenswrapper[4739]: E0218 14:24:21.395417 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60d51f11-fba7-4368-9665-198dca1f9adc" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.395462 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="60d51f11-fba7-4368-9665-198dca1f9adc" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.395790 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="60d51f11-fba7-4368-9665-198dca1f9adc" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.396910 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.399921 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.400138 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.400600 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.410105 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.479229 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea00e513-02cf-4951-b9ec-50966f982142-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.479319 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qshvb\" (UniqueName: \"kubernetes.io/projected/ea00e513-02cf-4951-b9ec-50966f982142-kube-api-access-qshvb\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.479382 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea00e513-02cf-4951-b9ec-50966f982142-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.479422 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea00e513-02cf-4951-b9ec-50966f982142-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.479692 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea00e513-02cf-4951-b9ec-50966f982142-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.583282 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea00e513-02cf-4951-b9ec-50966f982142-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.583369 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qshvb\" (UniqueName: \"kubernetes.io/projected/ea00e513-02cf-4951-b9ec-50966f982142-kube-api-access-qshvb\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.583421 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea00e513-02cf-4951-b9ec-50966f982142-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.583470 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea00e513-02cf-4951-b9ec-50966f982142-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.583689 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea00e513-02cf-4951-b9ec-50966f982142-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.592099 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea00e513-02cf-4951-b9ec-50966f982142-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.592117 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea00e513-02cf-4951-b9ec-50966f982142-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 
14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.592980 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea00e513-02cf-4951-b9ec-50966f982142-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.594130 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea00e513-02cf-4951-b9ec-50966f982142-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.605959 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qshvb\" (UniqueName: \"kubernetes.io/projected/ea00e513-02cf-4951-b9ec-50966f982142-kube-api-access-qshvb\") pod \"nova-cell1-novncproxy-0\" (UID: \"ea00e513-02cf-4951-b9ec-50966f982142\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.721273 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:24:21 crc kubenswrapper[4739]: I0218 14:24:21.860356 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 18 14:24:22 crc kubenswrapper[4739]: I0218 14:24:22.030262 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 14:24:22 crc kubenswrapper[4739]: I0218 14:24:22.307185 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85906c1a-8b4b-4859-a6dc-08dd07710f2a","Type":"ContainerStarted","Data":"e4b12677a2033ce8ffaec9a3b3ba58a5ad30b2b8bfd0b94142bf853bf46354ec"} Feb 18 14:24:22 crc kubenswrapper[4739]: I0218 14:24:22.312331 4739 generic.go:334] "Generic (PLEG): container finished" podID="d4d2e1ea-d8fe-4724-becf-0a53840d8b5c" containerID="f654a93fc558fd96d5cdb40c4eb8145a76ceb6daf5c1d8dd83b579ef3e4f1ae6" exitCode=0 Feb 18 14:24:22 crc kubenswrapper[4739]: I0218 14:24:22.312374 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7d9ft" event={"ID":"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c","Type":"ContainerDied","Data":"f654a93fc558fd96d5cdb40c4eb8145a76ceb6daf5c1d8dd83b579ef3e4f1ae6"} Feb 18 14:24:22 crc kubenswrapper[4739]: I0218 14:24:22.443919 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60d51f11-fba7-4368-9665-198dca1f9adc" path="/var/lib/kubelet/pods/60d51f11-fba7-4368-9665-198dca1f9adc/volumes" Feb 18 14:24:22 crc kubenswrapper[4739]: I0218 14:24:22.523751 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 14:24:22 crc kubenswrapper[4739]: I0218 14:24:22.533939 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 14:24:22 crc kubenswrapper[4739]: I0218 14:24:22.537004 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 14:24:22 crc kubenswrapper[4739]: I0218 14:24:22.650642 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 14:24:23 crc kubenswrapper[4739]: I0218 14:24:23.429007 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"85906c1a-8b4b-4859-a6dc-08dd07710f2a","Type":"ContainerStarted","Data":"d766add10d6ad661f6c39400b544b5adb35172e4beaf44e23e8a240be708fe79"} Feb 18 14:24:23 crc kubenswrapper[4739]: I0218 14:24:23.480894 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"42803b7f-4360-4d79-94e6-ab17944142ab","Type":"ContainerStarted","Data":"02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d"} Feb 18 14:24:23 crc kubenswrapper[4739]: I0218 14:24:23.510022 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ea00e513-02cf-4951-b9ec-50966f982142","Type":"ContainerStarted","Data":"4c11aae1340e8ea51386680a86fd23bee84c424184f4f4a1a025c61c2ac3f6e2"} Feb 18 14:24:23 crc kubenswrapper[4739]: I0218 14:24:23.510137 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ea00e513-02cf-4951-b9ec-50966f982142","Type":"ContainerStarted","Data":"716709db730350454c6b698e80462da06d5fdd95d2f7ebf36e70feed7f8aa3a0"} Feb 18 14:24:23 crc kubenswrapper[4739]: I0218 14:24:23.548038 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.54801115 podStartE2EDuration="2.54801115s" podCreationTimestamp="2026-02-18 14:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:24:23.535052884 +0000 UTC m=+1496.030773826" watchObservedRunningTime="2026-02-18 14:24:23.54801115 +0000 UTC m=+1496.043732092" Feb 18 14:24:23 crc kubenswrapper[4739]: I0218 14:24:23.551687 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 14:24:23 crc kubenswrapper[4739]: I0218 14:24:23.637770 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 14:24:23 crc kubenswrapper[4739]: I0218 14:24:23.639696 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 14:24:23 crc kubenswrapper[4739]: I0218 14:24:23.642880 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 14:24:23 crc kubenswrapper[4739]: I0218 14:24:23.651929 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.072600 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.229118 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-combined-ca-bundle\") pod \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.229472 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-scripts\") pod \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.229664 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7768\" (UniqueName: \"kubernetes.io/projected/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-kube-api-access-l7768\") pod \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.229719 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-config-data\") pod \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\" (UID: \"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c\") " Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.271024 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-kube-api-access-l7768" (OuterVolumeSpecName: "kube-api-access-l7768") pod "d4d2e1ea-d8fe-4724-becf-0a53840d8b5c" (UID: "d4d2e1ea-d8fe-4724-becf-0a53840d8b5c"). InnerVolumeSpecName "kube-api-access-l7768". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.273230 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-scripts" (OuterVolumeSpecName: "scripts") pod "d4d2e1ea-d8fe-4724-becf-0a53840d8b5c" (UID: "d4d2e1ea-d8fe-4724-becf-0a53840d8b5c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.315610 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-config-data" (OuterVolumeSpecName: "config-data") pod "d4d2e1ea-d8fe-4724-becf-0a53840d8b5c" (UID: "d4d2e1ea-d8fe-4724-becf-0a53840d8b5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.319352 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4d2e1ea-d8fe-4724-becf-0a53840d8b5c" (UID: "d4d2e1ea-d8fe-4724-becf-0a53840d8b5c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.334307 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.334353 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.334369 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7768\" (UniqueName: \"kubernetes.io/projected/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-kube-api-access-l7768\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.334382 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.454749 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 14:24:24 crc kubenswrapper[4739]: E0218 14:24:24.459186 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d2e1ea-d8fe-4724-becf-0a53840d8b5c" containerName="nova-cell1-conductor-db-sync" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.459243 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d2e1ea-d8fe-4724-becf-0a53840d8b5c" containerName="nova-cell1-conductor-db-sync" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.459700 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4d2e1ea-d8fe-4724-becf-0a53840d8b5c" containerName="nova-cell1-conductor-db-sync" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.461498 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.482037 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.528587 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7d9ft" event={"ID":"d4d2e1ea-d8fe-4724-becf-0a53840d8b5c","Type":"ContainerDied","Data":"a2ff715f6687dcb420415366f7ad28d9ba10898b955268123d1a60c93c36a991"} Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.528837 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2ff715f6687dcb420415366f7ad28d9ba10898b955268123d1a60c93c36a991" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.528685 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7d9ft" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.539321 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85906c1a-8b4b-4859-a6dc-08dd07710f2a","Type":"ContainerStarted","Data":"4291a3535ff05029212de02ed632a0f0afec9265ce8aaa061f3d8d796d1b98cf"} Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.539398 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.544000 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.641951 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbltm\" (UniqueName: \"kubernetes.io/projected/ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0-kube-api-access-dbltm\") pod \"nova-cell1-conductor-0\" (UID: \"ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0\") " pod="openstack/nova-cell1-conductor-0" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.642100 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0\") " pod="openstack/nova-cell1-conductor-0" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.642335 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0\") " pod="openstack/nova-cell1-conductor-0" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.746184 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0\") " pod="openstack/nova-cell1-conductor-0" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.746238 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbltm\" (UniqueName: \"kubernetes.io/projected/ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0-kube-api-access-dbltm\") pod \"nova-cell1-conductor-0\" (UID: \"ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0\") " pod="openstack/nova-cell1-conductor-0" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.746367 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0\") " pod="openstack/nova-cell1-conductor-0" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.771304 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-8x5jn"] Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.807727 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0\") " 
pod="openstack/nova-cell1-conductor-0" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.807884 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0\") " pod="openstack/nova-cell1-conductor-0" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.823486 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbltm\" (UniqueName: \"kubernetes.io/projected/ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0-kube-api-access-dbltm\") pod \"nova-cell1-conductor-0\" (UID: \"ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0\") " pod="openstack/nova-cell1-conductor-0" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.835181 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.924046 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-8x5jn"] Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.960555 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.960623 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-config\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.960673 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.963476 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhzzj\" (UniqueName: \"kubernetes.io/projected/107ff6da-f0af-471c-bfaf-08364992c44e-kube-api-access-bhzzj\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.963917 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:24 crc kubenswrapper[4739]: I0218 14:24:24.964090 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 
14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.070060 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.070283 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.070337 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-config\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.070383 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.071124 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhzzj\" (UniqueName: \"kubernetes.io/projected/107ff6da-f0af-471c-bfaf-08364992c44e-kube-api-access-bhzzj\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.071289 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.071339 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.071674 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-config\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.071726 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.071679 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.072236 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.092099 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.092269 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhzzj\" (UniqueName: \"kubernetes.io/projected/107ff6da-f0af-471c-bfaf-08364992c44e-kube-api-access-bhzzj\") pod \"dnsmasq-dns-f84f9ccf-8x5jn\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.201383 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:25 crc kubenswrapper[4739]: I0218 14:24:25.540475 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wg5zz" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="registry-server" probeResult="failure" output=< Feb 18 14:24:25 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:24:25 crc kubenswrapper[4739]: > Feb 18 14:24:26 crc kubenswrapper[4739]: I0218 14:24:26.345508 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 14:24:26 crc kubenswrapper[4739]: W0218 14:24:26.354568 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffa018e5_ca81_4d0e_86f7_a9c6fb25fdd0.slice/crio-48a05262d7d80a1ca09748961f034f03a5fa9db3c638f19b88e2a9df820d2671 WatchSource:0}: Error finding container 48a05262d7d80a1ca09748961f034f03a5fa9db3c638f19b88e2a9df820d2671: Status 404 returned error can't find the container with id 48a05262d7d80a1ca09748961f034f03a5fa9db3c638f19b88e2a9df820d2671 Feb 18 14:24:26 crc kubenswrapper[4739]: I0218 14:24:26.360887 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-8x5jn"] Feb 18 14:24:26 crc kubenswrapper[4739]: I0218 14:24:26.580396 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" event={"ID":"107ff6da-f0af-471c-bfaf-08364992c44e","Type":"ContainerStarted","Data":"de253019cab38f430ba5baf38246bca706fcc962369cf21cb7d0dd554226a189"} Feb 18 14:24:26 crc kubenswrapper[4739]: I0218 14:24:26.587196 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0","Type":"ContainerStarted","Data":"48a05262d7d80a1ca09748961f034f03a5fa9db3c638f19b88e2a9df820d2671"} Feb 18 14:24:26 crc kubenswrapper[4739]: I0218 14:24:26.721891 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.314302 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-0"] Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.602795 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"42803b7f-4360-4d79-94e6-ab17944142ab","Type":"ContainerStarted","Data":"7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db"} Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.602879 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-api" containerID="cri-o://941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5" gracePeriod=30 Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.602889 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-listener" containerID="cri-o://7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db" gracePeriod=30 Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.602941 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-notifier" containerID="cri-o://02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d" gracePeriod=30 Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.603029 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-evaluator" containerID="cri-o://5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3" gracePeriod=30 Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.612169 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85906c1a-8b4b-4859-a6dc-08dd07710f2a","Type":"ContainerStarted","Data":"e29998f3df73b3af694e64620572379b35aa9549dde36a0d6b87129b31489083"} Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.612457 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.616515 4739 generic.go:334] "Generic (PLEG): container finished" podID="107ff6da-f0af-471c-bfaf-08364992c44e" containerID="0fa795a89771ccc792842d737411fc77aacef89807fe0ac39f6e7b6973469e7a" exitCode=0 Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.616642 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" event={"ID":"107ff6da-f0af-471c-bfaf-08364992c44e","Type":"ContainerDied","Data":"0fa795a89771ccc792842d737411fc77aacef89807fe0ac39f6e7b6973469e7a"} Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.620396 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1abac962-efca-4430-8a58-ab62a802c442" containerName="nova-api-log" containerID="cri-o://c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730" gracePeriod=30 Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.621147 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0","Type":"ContainerStarted","Data":"4138276df8d1c07cc1007c7db31945d039be4e64d329aa76cf8b93546fa4145e"} Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.621184 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 18 14:24:27 crc 
kubenswrapper[4739]: I0218 14:24:27.621245 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1abac962-efca-4430-8a58-ab62a802c442" containerName="nova-api-api" containerID="cri-o://8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed" gracePeriod=30 Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.651225 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.52332394 podStartE2EDuration="18.651208954s" podCreationTimestamp="2026-02-18 14:24:09 +0000 UTC" firstStartedPulling="2026-02-18 14:24:10.949204041 +0000 UTC m=+1483.444924963" lastFinishedPulling="2026-02-18 14:24:26.077089045 +0000 UTC m=+1498.572809977" observedRunningTime="2026-02-18 14:24:27.642169336 +0000 UTC m=+1500.137890268" watchObservedRunningTime="2026-02-18 14:24:27.651208954 +0000 UTC m=+1500.146929876" Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.690536 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=6.37799336 podStartE2EDuration="11.690516703s" podCreationTimestamp="2026-02-18 14:24:16 +0000 UTC" firstStartedPulling="2026-02-18 14:24:20.762567131 +0000 UTC m=+1493.258288053" lastFinishedPulling="2026-02-18 14:24:26.075090474 +0000 UTC m=+1498.570811396" observedRunningTime="2026-02-18 14:24:27.677766402 +0000 UTC m=+1500.173487324" watchObservedRunningTime="2026-02-18 14:24:27.690516703 +0000 UTC m=+1500.186237625" Feb 18 14:24:27 crc kubenswrapper[4739]: I0218 14:24:27.764844 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.764824844 podStartE2EDuration="3.764824844s" podCreationTimestamp="2026-02-18 14:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:24:27.755280063 +0000 UTC m=+1500.251000995" watchObservedRunningTime="2026-02-18 14:24:27.764824844 +0000 UTC m=+1500.260545766" Feb 18 14:24:28 crc kubenswrapper[4739]: I0218 14:24:28.633026 4739 generic.go:334] "Generic (PLEG): container finished" podID="1abac962-efca-4430-8a58-ab62a802c442" containerID="c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730" exitCode=143 Feb 18 14:24:28 crc kubenswrapper[4739]: I0218 14:24:28.633126 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1abac962-efca-4430-8a58-ab62a802c442","Type":"ContainerDied","Data":"c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730"} Feb 18 14:24:28 crc kubenswrapper[4739]: I0218 14:24:28.635558 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" event={"ID":"107ff6da-f0af-471c-bfaf-08364992c44e","Type":"ContainerStarted","Data":"6d1fa176139b49aa3f7f2787ae66d435ca3eb9a294abfbc4eac9b73d793efd8b"} Feb 18 14:24:28 crc kubenswrapper[4739]: I0218 14:24:28.635870 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:28 crc kubenswrapper[4739]: I0218 14:24:28.637608 4739 generic.go:334] "Generic (PLEG): container finished" podID="42803b7f-4360-4d79-94e6-ab17944142ab" containerID="941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5" exitCode=0 Feb 18 14:24:28 crc kubenswrapper[4739]: I0218 14:24:28.637797 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"42803b7f-4360-4d79-94e6-ab17944142ab","Type":"ContainerDied","Data":"941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5"} Feb 18 14:24:28 crc kubenswrapper[4739]: I0218 14:24:28.661566 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" podStartSLOduration=4.661543194 podStartE2EDuration="4.661543194s" podCreationTimestamp="2026-02-18 14:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:24:28.656004234 +0000 UTC m=+1501.151725176" watchObservedRunningTime="2026-02-18 14:24:28.661543194 +0000 UTC m=+1501.157264116" Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.372886 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.373192 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.373241 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.374095 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.374150 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" gracePeriod=600 Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.491296 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:24:29 crc kubenswrapper[4739]: E0218 14:24:29.516655 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.652056 4739 generic.go:334] "Generic (PLEG): container finished" podID="42803b7f-4360-4d79-94e6-ab17944142ab" containerID="5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3" exitCode=0 Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.652117 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"42803b7f-4360-4d79-94e6-ab17944142ab","Type":"ContainerDied","Data":"5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3"} Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.654394 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" exitCode=0 Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.654452 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124"} Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.654510 4739 scope.go:117] "RemoveContainer" containerID="d7b9d56369135778a280da4378067ee9271657484f8ba97b96f463ca53b6178a" Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.654674 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="ceilometer-central-agent" containerID="cri-o://e4b12677a2033ce8ffaec9a3b3ba58a5ad30b2b8bfd0b94142bf853bf46354ec" gracePeriod=30 Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.654710 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="sg-core" containerID="cri-o://4291a3535ff05029212de02ed632a0f0afec9265ce8aaa061f3d8d796d1b98cf" gracePeriod=30 Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.654779 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="ceilometer-notification-agent" containerID="cri-o://d766add10d6ad661f6c39400b544b5adb35172e4beaf44e23e8a240be708fe79" gracePeriod=30 Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.654819 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="proxy-httpd" containerID="cri-o://e29998f3df73b3af694e64620572379b35aa9549dde36a0d6b87129b31489083" gracePeriod=30 Feb 18 14:24:29 crc kubenswrapper[4739]: I0218 14:24:29.655352 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:24:29 crc kubenswrapper[4739]: E0218 14:24:29.655647 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:24:30 crc kubenswrapper[4739]: E0218 14:24:30.442179 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85906c1a_8b4b_4859_a6dc_08dd07710f2a.slice/crio-conmon-d766add10d6ad661f6c39400b544b5adb35172e4beaf44e23e8a240be708fe79.scope\": RecentStats: unable to find data in memory cache]" Feb 18 14:24:30 crc kubenswrapper[4739]: I0218 14:24:30.682822 4739 generic.go:334] "Generic (PLEG): container finished" 
podID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerID="e29998f3df73b3af694e64620572379b35aa9549dde36a0d6b87129b31489083" exitCode=0 Feb 18 14:24:30 crc kubenswrapper[4739]: I0218 14:24:30.682851 4739 generic.go:334] "Generic (PLEG): container finished" podID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerID="4291a3535ff05029212de02ed632a0f0afec9265ce8aaa061f3d8d796d1b98cf" exitCode=2 Feb 18 14:24:30 crc kubenswrapper[4739]: I0218 14:24:30.682859 4739 generic.go:334] "Generic (PLEG): container finished" podID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerID="d766add10d6ad661f6c39400b544b5adb35172e4beaf44e23e8a240be708fe79" exitCode=0 Feb 18 14:24:30 crc kubenswrapper[4739]: I0218 14:24:30.682904 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85906c1a-8b4b-4859-a6dc-08dd07710f2a","Type":"ContainerDied","Data":"e29998f3df73b3af694e64620572379b35aa9549dde36a0d6b87129b31489083"} Feb 18 14:24:30 crc kubenswrapper[4739]: I0218 14:24:30.682951 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85906c1a-8b4b-4859-a6dc-08dd07710f2a","Type":"ContainerDied","Data":"4291a3535ff05029212de02ed632a0f0afec9265ce8aaa061f3d8d796d1b98cf"} Feb 18 14:24:30 crc kubenswrapper[4739]: I0218 14:24:30.682963 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85906c1a-8b4b-4859-a6dc-08dd07710f2a","Type":"ContainerDied","Data":"d766add10d6ad661f6c39400b544b5adb35172e4beaf44e23e8a240be708fe79"} Feb 18 14:24:30 crc kubenswrapper[4739]: I0218 14:24:30.685793 4739 generic.go:334] "Generic (PLEG): container finished" podID="42803b7f-4360-4d79-94e6-ab17944142ab" containerID="02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d" exitCode=0 Feb 18 14:24:30 crc kubenswrapper[4739]: I0218 14:24:30.685847 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"42803b7f-4360-4d79-94e6-ab17944142ab","Type":"ContainerDied","Data":"02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d"} Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.554338 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.627275 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abac962-efca-4430-8a58-ab62a802c442-combined-ca-bundle\") pod \"1abac962-efca-4430-8a58-ab62a802c442\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.627434 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1abac962-efca-4430-8a58-ab62a802c442-config-data\") pod \"1abac962-efca-4430-8a58-ab62a802c442\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.627593 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1abac962-efca-4430-8a58-ab62a802c442-logs\") pod \"1abac962-efca-4430-8a58-ab62a802c442\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.627626 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqqqf\" (UniqueName: \"kubernetes.io/projected/1abac962-efca-4430-8a58-ab62a802c442-kube-api-access-wqqqf\") pod \"1abac962-efca-4430-8a58-ab62a802c442\" (UID: \"1abac962-efca-4430-8a58-ab62a802c442\") " Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.628018 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1abac962-efca-4430-8a58-ab62a802c442-logs" (OuterVolumeSpecName: "logs") pod "1abac962-efca-4430-8a58-ab62a802c442" (UID: "1abac962-efca-4430-8a58-ab62a802c442"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.628239 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1abac962-efca-4430-8a58-ab62a802c442-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.633604 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1abac962-efca-4430-8a58-ab62a802c442-kube-api-access-wqqqf" (OuterVolumeSpecName: "kube-api-access-wqqqf") pod "1abac962-efca-4430-8a58-ab62a802c442" (UID: "1abac962-efca-4430-8a58-ab62a802c442"). InnerVolumeSpecName "kube-api-access-wqqqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.670291 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1abac962-efca-4430-8a58-ab62a802c442-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1abac962-efca-4430-8a58-ab62a802c442" (UID: "1abac962-efca-4430-8a58-ab62a802c442"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.674917 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1abac962-efca-4430-8a58-ab62a802c442-config-data" (OuterVolumeSpecName: "config-data") pod "1abac962-efca-4430-8a58-ab62a802c442" (UID: "1abac962-efca-4430-8a58-ab62a802c442"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.706859 4739 generic.go:334] "Generic (PLEG): container finished" podID="1abac962-efca-4430-8a58-ab62a802c442" containerID="8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed" exitCode=0 Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.706904 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1abac962-efca-4430-8a58-ab62a802c442","Type":"ContainerDied","Data":"8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed"} Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.706932 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1abac962-efca-4430-8a58-ab62a802c442","Type":"ContainerDied","Data":"f9a2e2a20257041f47da0dff019617b0952ac1e5137c62cf8adc4e7b636524d9"} Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.706949 4739 scope.go:117] "RemoveContainer" containerID="8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.707088 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.722340 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.733422 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1abac962-efca-4430-8a58-ab62a802c442-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.733476 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqqqf\" (UniqueName: \"kubernetes.io/projected/1abac962-efca-4430-8a58-ab62a802c442-kube-api-access-wqqqf\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.733488 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abac962-efca-4430-8a58-ab62a802c442-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.769241 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.784420 4739 scope.go:117] "RemoveContainer" containerID="c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.788096 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.801870 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.819352 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 14:24:31 crc kubenswrapper[4739]: E0218 14:24:31.819926 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abac962-efca-4430-8a58-ab62a802c442" containerName="nova-api-log" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.819949 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abac962-efca-4430-8a58-ab62a802c442" containerName="nova-api-log" Feb 18 14:24:31 crc kubenswrapper[4739]: E0218 14:24:31.819969 4739 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1abac962-efca-4430-8a58-ab62a802c442" containerName="nova-api-api" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.819976 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abac962-efca-4430-8a58-ab62a802c442" containerName="nova-api-api" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.822424 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1abac962-efca-4430-8a58-ab62a802c442" containerName="nova-api-api" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.822470 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1abac962-efca-4430-8a58-ab62a802c442" containerName="nova-api-log" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.823932 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.828013 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.828201 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.828309 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.838796 4739 scope.go:117] "RemoveContainer" containerID="8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed" Feb 18 14:24:31 crc kubenswrapper[4739]: E0218 14:24:31.843812 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed\": container with ID starting with 8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed not found: ID does not exist" containerID="8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.844087 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed"} err="failed to get container status \"8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed\": rpc error: code = NotFound desc = could not find container \"8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed\": container with ID starting with 8e17512c0f09d4dde6503476f90b696934a478425bd32a216302923c06a791ed not found: ID does not exist" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.844115 4739 scope.go:117] "RemoveContainer" containerID="c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730" Feb 18 14:24:31 crc kubenswrapper[4739]: E0218 14:24:31.850877 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730\": container with ID starting with c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730 not found: ID does not exist" containerID="c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.850924 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730"} err="failed to get container status \"c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730\": rpc 
error: code = NotFound desc = could not find container \"c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730\": container with ID starting with c92ee9cf6ea2c5cce23f629e980326a4dfd4c3a47c8ba740f66c93f8b3541730 not found: ID does not exist" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.854699 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.938687 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.938748 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mksjx\" (UniqueName: \"kubernetes.io/projected/61e22e5d-021a-404b-b763-cf02d6f2bc9e-kube-api-access-mksjx\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.938898 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-public-tls-certs\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.939305 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.939361 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-config-data\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0" Feb 18 14:24:31 crc kubenswrapper[4739]: I0218 14:24:31.939601 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61e22e5d-021a-404b-b763-cf02d6f2bc9e-logs\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0" Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.043055 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-public-tls-certs\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0" Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.043296 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0" Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.043330 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-config-data\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0" Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.043487 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61e22e5d-021a-404b-b763-cf02d6f2bc9e-logs\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0" Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.043624 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0" Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.043670 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mksjx\" (UniqueName: \"kubernetes.io/projected/61e22e5d-021a-404b-b763-cf02d6f2bc9e-kube-api-access-mksjx\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0" Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.044174 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61e22e5d-021a-404b-b763-cf02d6f2bc9e-logs\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0" Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.047919 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0" Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.048279 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0" Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.048560 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-public-tls-certs\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0" Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.065852 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mksjx\" (UniqueName: \"kubernetes.io/projected/61e22e5d-021a-404b-b763-cf02d6f2bc9e-kube-api-access-mksjx\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0" Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.066178 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-config-data\") pod \"nova-api-0\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " pod="openstack/nova-api-0" Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.151941 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.433592 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1abac962-efca-4430-8a58-ab62a802c442" path="/var/lib/kubelet/pods/1abac962-efca-4430-8a58-ab62a802c442/volumes" Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.670874 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.720112 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61e22e5d-021a-404b-b763-cf02d6f2bc9e","Type":"ContainerStarted","Data":"f916a10a472599240a0b09bda183874925aa520b59c1c803a4b2bd0281891f10"} Feb 18 14:24:32 crc kubenswrapper[4739]: I0218 14:24:32.741800 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 18 14:24:33 crc kubenswrapper[4739]: I0218 14:24:33.746883 4739 generic.go:334] "Generic (PLEG): container finished" podID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerID="e4b12677a2033ce8ffaec9a3b3ba58a5ad30b2b8bfd0b94142bf853bf46354ec" exitCode=0 Feb 18 14:24:33 crc kubenswrapper[4739]: I0218 14:24:33.747008 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85906c1a-8b4b-4859-a6dc-08dd07710f2a","Type":"ContainerDied","Data":"e4b12677a2033ce8ffaec9a3b3ba58a5ad30b2b8bfd0b94142bf853bf46354ec"} Feb 18 14:24:33 crc kubenswrapper[4739]: I0218 14:24:33.751838 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61e22e5d-021a-404b-b763-cf02d6f2bc9e","Type":"ContainerStarted","Data":"fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2"} Feb 18 14:24:33 crc kubenswrapper[4739]: I0218 14:24:33.751889 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61e22e5d-021a-404b-b763-cf02d6f2bc9e","Type":"ContainerStarted","Data":"4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0"} Feb 18 14:24:33 crc kubenswrapper[4739]: I0218 14:24:33.784188 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.784162717 podStartE2EDuration="2.784162717s" podCreationTimestamp="2026-02-18 14:24:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:24:33.770027752 +0000 UTC m=+1506.265748694" watchObservedRunningTime="2026-02-18 14:24:33.784162717 +0000 UTC m=+1506.279883639" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.068837 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.103592 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-ceilometer-tls-certs\") pod \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.103717 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85906c1a-8b4b-4859-a6dc-08dd07710f2a-log-httpd\") pod \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.103781 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-config-data\") pod \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.103832 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-combined-ca-bundle\") pod \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.103939 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-sg-core-conf-yaml\") pod \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.104023 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85906c1a-8b4b-4859-a6dc-08dd07710f2a-run-httpd\") pod \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.104066 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-scripts\") pod \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.104114 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsfrq\" (UniqueName: \"kubernetes.io/projected/85906c1a-8b4b-4859-a6dc-08dd07710f2a-kube-api-access-xsfrq\") pod \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\" (UID: \"85906c1a-8b4b-4859-a6dc-08dd07710f2a\") " Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.106110 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85906c1a-8b4b-4859-a6dc-08dd07710f2a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "85906c1a-8b4b-4859-a6dc-08dd07710f2a" (UID: "85906c1a-8b4b-4859-a6dc-08dd07710f2a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.106485 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85906c1a-8b4b-4859-a6dc-08dd07710f2a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "85906c1a-8b4b-4859-a6dc-08dd07710f2a" (UID: "85906c1a-8b4b-4859-a6dc-08dd07710f2a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.112108 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85906c1a-8b4b-4859-a6dc-08dd07710f2a-kube-api-access-xsfrq" (OuterVolumeSpecName: "kube-api-access-xsfrq") pod "85906c1a-8b4b-4859-a6dc-08dd07710f2a" (UID: "85906c1a-8b4b-4859-a6dc-08dd07710f2a"). InnerVolumeSpecName "kube-api-access-xsfrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.113664 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-scripts" (OuterVolumeSpecName: "scripts") pod "85906c1a-8b4b-4859-a6dc-08dd07710f2a" (UID: "85906c1a-8b4b-4859-a6dc-08dd07710f2a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.154740 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "85906c1a-8b4b-4859-a6dc-08dd07710f2a" (UID: "85906c1a-8b4b-4859-a6dc-08dd07710f2a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.196274 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "85906c1a-8b4b-4859-a6dc-08dd07710f2a" (UID: "85906c1a-8b4b-4859-a6dc-08dd07710f2a"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.207333 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.207364 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85906c1a-8b4b-4859-a6dc-08dd07710f2a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.207376 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.207386 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsfrq\" (UniqueName: \"kubernetes.io/projected/85906c1a-8b4b-4859-a6dc-08dd07710f2a-kube-api-access-xsfrq\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.207396 4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.207404 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/85906c1a-8b4b-4859-a6dc-08dd07710f2a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.238001 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85906c1a-8b4b-4859-a6dc-08dd07710f2a" (UID: "85906c1a-8b4b-4859-a6dc-08dd07710f2a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.306672 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-config-data" (OuterVolumeSpecName: "config-data") pod "85906c1a-8b4b-4859-a6dc-08dd07710f2a" (UID: "85906c1a-8b4b-4859-a6dc-08dd07710f2a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.310063 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.310095 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85906c1a-8b4b-4859-a6dc-08dd07710f2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.768750 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.769872 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"85906c1a-8b4b-4859-a6dc-08dd07710f2a","Type":"ContainerDied","Data":"3cb69177aa55275b8d9b6fef13b5aac13b6cdb36cddbb51be35d3b65d87e5c5e"} Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.769911 4739 scope.go:117] "RemoveContainer" containerID="e29998f3df73b3af694e64620572379b35aa9549dde36a0d6b87129b31489083" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.802893 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.803555 4739 scope.go:117] "RemoveContainer" containerID="4291a3535ff05029212de02ed632a0f0afec9265ce8aaa061f3d8d796d1b98cf" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.816488 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.826737 4739 scope.go:117] "RemoveContainer" containerID="d766add10d6ad661f6c39400b544b5adb35172e4beaf44e23e8a240be708fe79" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.850499 4739 scope.go:117] "RemoveContainer" containerID="e4b12677a2033ce8ffaec9a3b3ba58a5ad30b2b8bfd0b94142bf853bf46354ec" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.867385 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:24:34 crc kubenswrapper[4739]: E0218 14:24:34.867902 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="sg-core" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.867924 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="sg-core" Feb 18 14:24:34 crc kubenswrapper[4739]: E0218 14:24:34.867946 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="ceilometer-notification-agent" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.867952 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="ceilometer-notification-agent" Feb 18 14:24:34 crc kubenswrapper[4739]: E0218 14:24:34.867983 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="ceilometer-central-agent" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.867989 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="ceilometer-central-agent" Feb 18 14:24:34 crc kubenswrapper[4739]: E0218 14:24:34.868009 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="proxy-httpd" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.868014 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="proxy-httpd" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.868196 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="proxy-httpd" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.868215 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="sg-core" Feb 18 14:24:34 crc kubenswrapper[4739]: 
I0218 14:24:34.868236 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="ceilometer-central-agent" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.868249 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" containerName="ceilometer-notification-agent" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.870337 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.872265 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.876002 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.880151 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.880471 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.925636 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.925697 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77zc9\" (UniqueName: \"kubernetes.io/projected/4106c506-1336-4121-a8d7-90fe333ce3df-kube-api-access-77zc9\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.925737 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.926598 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4106c506-1336-4121-a8d7-90fe333ce3df-log-httpd\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.927156 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.927277 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-scripts\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.927354 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-config-data\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0" Feb 18 14:24:34 crc kubenswrapper[4739]: I0218 14:24:34.927505 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4106c506-1336-4121-a8d7-90fe333ce3df-run-httpd\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.029967 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.030052 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77zc9\" (UniqueName: \"kubernetes.io/projected/4106c506-1336-4121-a8d7-90fe333ce3df-kube-api-access-77zc9\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.030106 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.030164 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4106c506-1336-4121-a8d7-90fe333ce3df-log-httpd\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.030257 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.030302 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-scripts\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.030849 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-config-data\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.030888 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4106c506-1336-4121-a8d7-90fe333ce3df-log-httpd\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 
14:24:35.030935 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4106c506-1336-4121-a8d7-90fe333ce3df-run-httpd\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.031204 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4106c506-1336-4121-a8d7-90fe333ce3df-run-httpd\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.036069 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.036700 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.036786 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.038399 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-config-data\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.041535 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-scripts\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.051575 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77zc9\" (UniqueName: \"kubernetes.io/projected/4106c506-1336-4121-a8d7-90fe333ce3df-kube-api-access-77zc9\") pod \"ceilometer-0\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " pod="openstack/ceilometer-0" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.124914 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.202938 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.216007 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.352872 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-qmxqt"] Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.353401 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" podUID="cb3e9cc3-348e-4556-89a2-ea261dd47147" containerName="dnsmasq-dns" containerID="cri-o://94476dfafd6d1d5f23f9e15354d4a5e30397b87f6bed37cf1f501afccf7bb2cc" gracePeriod=10 Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.515287 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wg5zz" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="registry-server" probeResult="failure" output=< Feb 18 14:24:35 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:24:35 crc kubenswrapper[4739]: > Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.806554 4739 generic.go:334] "Generic (PLEG): container finished" podID="cb3e9cc3-348e-4556-89a2-ea261dd47147" containerID="94476dfafd6d1d5f23f9e15354d4a5e30397b87f6bed37cf1f501afccf7bb2cc" exitCode=0 Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.806594 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" event={"ID":"cb3e9cc3-348e-4556-89a2-ea261dd47147","Type":"ContainerDied","Data":"94476dfafd6d1d5f23f9e15354d4a5e30397b87f6bed37cf1f501afccf7bb2cc"} Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.836631 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.903909 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-mvdqm"] Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.905621 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.913909 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.914118 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.928926 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-mvdqm"] Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.963130 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-scripts\") pod \"nova-cell1-cell-mapping-mvdqm\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.963429 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9ld7\" (UniqueName: \"kubernetes.io/projected/147cff80-30af-4fc7-961f-5f6e17af51bb-kube-api-access-p9ld7\") pod \"nova-cell1-cell-mapping-mvdqm\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.963657 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mvdqm\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:35 crc kubenswrapper[4739]: I0218 14:24:35.963888 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-config-data\") pod \"nova-cell1-cell-mapping-mvdqm\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.002439 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.066567 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-dns-swift-storage-0\") pod \"cb3e9cc3-348e-4556-89a2-ea261dd47147\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.066623 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-dns-svc\") pod \"cb3e9cc3-348e-4556-89a2-ea261dd47147\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.066646 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7bkd\" (UniqueName: \"kubernetes.io/projected/cb3e9cc3-348e-4556-89a2-ea261dd47147-kube-api-access-p7bkd\") pod \"cb3e9cc3-348e-4556-89a2-ea261dd47147\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.066679 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-ovsdbserver-nb\") pod \"cb3e9cc3-348e-4556-89a2-ea261dd47147\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.066794 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-ovsdbserver-sb\") pod \"cb3e9cc3-348e-4556-89a2-ea261dd47147\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.067090 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-config\") pod \"cb3e9cc3-348e-4556-89a2-ea261dd47147\" (UID: \"cb3e9cc3-348e-4556-89a2-ea261dd47147\") " Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.067630 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9ld7\" (UniqueName: \"kubernetes.io/projected/147cff80-30af-4fc7-961f-5f6e17af51bb-kube-api-access-p9ld7\") pod \"nova-cell1-cell-mapping-mvdqm\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.067712 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mvdqm\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.067816 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-config-data\") pod \"nova-cell1-cell-mapping-mvdqm\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.067957 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-scripts\") pod \"nova-cell1-cell-mapping-mvdqm\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.074773 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-scripts\") pod \"nova-cell1-cell-mapping-mvdqm\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.088437 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mvdqm\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.095879 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9ld7\" (UniqueName: \"kubernetes.io/projected/147cff80-30af-4fc7-961f-5f6e17af51bb-kube-api-access-p9ld7\") pod \"nova-cell1-cell-mapping-mvdqm\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.114215 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-config-data\") pod \"nova-cell1-cell-mapping-mvdqm\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.118735 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb3e9cc3-348e-4556-89a2-ea261dd47147-kube-api-access-p7bkd" (OuterVolumeSpecName: "kube-api-access-p7bkd") pod "cb3e9cc3-348e-4556-89a2-ea261dd47147" (UID: "cb3e9cc3-348e-4556-89a2-ea261dd47147"). InnerVolumeSpecName "kube-api-access-p7bkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.167408 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cb3e9cc3-348e-4556-89a2-ea261dd47147" (UID: "cb3e9cc3-348e-4556-89a2-ea261dd47147"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.171326 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7bkd\" (UniqueName: \"kubernetes.io/projected/cb3e9cc3-348e-4556-89a2-ea261dd47147-kube-api-access-p7bkd\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.171372 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.174604 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cb3e9cc3-348e-4556-89a2-ea261dd47147" (UID: "cb3e9cc3-348e-4556-89a2-ea261dd47147"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.175100 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cb3e9cc3-348e-4556-89a2-ea261dd47147" (UID: "cb3e9cc3-348e-4556-89a2-ea261dd47147"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.186720 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cb3e9cc3-348e-4556-89a2-ea261dd47147" (UID: "cb3e9cc3-348e-4556-89a2-ea261dd47147"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.212004 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-config" (OuterVolumeSpecName: "config") pod "cb3e9cc3-348e-4556-89a2-ea261dd47147" (UID: "cb3e9cc3-348e-4556-89a2-ea261dd47147"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.232730 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.273724 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.273762 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.273774 4739 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.273786 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb3e9cc3-348e-4556-89a2-ea261dd47147-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.436687 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85906c1a-8b4b-4859-a6dc-08dd07710f2a" path="/var/lib/kubelet/pods/85906c1a-8b4b-4859-a6dc-08dd07710f2a/volumes" Feb 18 14:24:36 crc kubenswrapper[4739]: W0218 14:24:36.795398 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod147cff80_30af_4fc7_961f_5f6e17af51bb.slice/crio-7262bc61ab6d16b820ba5ec18f0720332300bcbef4ac82b91ce508f15faf1096 WatchSource:0}: Error finding container 7262bc61ab6d16b820ba5ec18f0720332300bcbef4ac82b91ce508f15faf1096: Status 404 returned error can't find the container with id 7262bc61ab6d16b820ba5ec18f0720332300bcbef4ac82b91ce508f15faf1096 Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.802265 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-cell-mapping-mvdqm"] Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.825057 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" event={"ID":"cb3e9cc3-348e-4556-89a2-ea261dd47147","Type":"ContainerDied","Data":"3735cb006b027d9cddfe7de2fdfabfbd28a60f1cc6094e080c7661fe3bdd11bf"} Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.825116 4739 scope.go:117] "RemoveContainer" containerID="94476dfafd6d1d5f23f9e15354d4a5e30397b87f6bed37cf1f501afccf7bb2cc" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.825242 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-qmxqt" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.830307 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4106c506-1336-4121-a8d7-90fe333ce3df","Type":"ContainerStarted","Data":"3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13"} Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.830352 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4106c506-1336-4121-a8d7-90fe333ce3df","Type":"ContainerStarted","Data":"238ba6fcba3c9aab1b9b714ffc70c837313da0593e88c1516a48844a82ac9503"} Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.832597 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mvdqm" event={"ID":"147cff80-30af-4fc7-961f-5f6e17af51bb","Type":"ContainerStarted","Data":"7262bc61ab6d16b820ba5ec18f0720332300bcbef4ac82b91ce508f15faf1096"} Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.859652 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-qmxqt"] Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.871841 4739 scope.go:117] "RemoveContainer" containerID="21d6c1252de616814b74822ec06612c09a85d4a3dc10b578fb97435ea22e69d8" Feb 18 14:24:36 crc kubenswrapper[4739]: I0218 14:24:36.872121 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-qmxqt"] Feb 18 14:24:37 crc kubenswrapper[4739]: I0218 14:24:37.847906 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4106c506-1336-4121-a8d7-90fe333ce3df","Type":"ContainerStarted","Data":"251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd"} Feb 18 14:24:37 crc kubenswrapper[4739]: I0218 14:24:37.851521 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mvdqm" event={"ID":"147cff80-30af-4fc7-961f-5f6e17af51bb","Type":"ContainerStarted","Data":"719754d11a438c2796a0ba11ae2f879324b6243f92382b8f8f42f425c9043930"} Feb 18 14:24:38 crc kubenswrapper[4739]: I0218 14:24:38.436979 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb3e9cc3-348e-4556-89a2-ea261dd47147" path="/var/lib/kubelet/pods/cb3e9cc3-348e-4556-89a2-ea261dd47147/volumes" Feb 18 14:24:38 crc kubenswrapper[4739]: I0218 14:24:38.471526 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-mvdqm" podStartSLOduration=3.4715009070000002 podStartE2EDuration="3.471500907s" podCreationTimestamp="2026-02-18 14:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:24:37.873459025 +0000 UTC m=+1510.369179957" watchObservedRunningTime="2026-02-18 
14:24:38.471500907 +0000 UTC m=+1510.967221829" Feb 18 14:24:38 crc kubenswrapper[4739]: I0218 14:24:38.867047 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4106c506-1336-4121-a8d7-90fe333ce3df","Type":"ContainerStarted","Data":"9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7"} Feb 18 14:24:40 crc kubenswrapper[4739]: I0218 14:24:40.895788 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4106c506-1336-4121-a8d7-90fe333ce3df","Type":"ContainerStarted","Data":"6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be"} Feb 18 14:24:40 crc kubenswrapper[4739]: I0218 14:24:40.896359 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 14:24:40 crc kubenswrapper[4739]: I0218 14:24:40.935067 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.277022893 podStartE2EDuration="6.935044823s" podCreationTimestamp="2026-02-18 14:24:34 +0000 UTC" firstStartedPulling="2026-02-18 14:24:35.89508807 +0000 UTC m=+1508.390808992" lastFinishedPulling="2026-02-18 14:24:40.55311 +0000 UTC m=+1513.048830922" observedRunningTime="2026-02-18 14:24:40.923847041 +0000 UTC m=+1513.419567963" watchObservedRunningTime="2026-02-18 14:24:40.935044823 +0000 UTC m=+1513.430765765" Feb 18 14:24:42 crc kubenswrapper[4739]: I0218 14:24:42.152318 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 14:24:42 crc kubenswrapper[4739]: I0218 14:24:42.152991 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 14:24:42 crc kubenswrapper[4739]: I0218 14:24:42.414827 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:24:42 crc kubenswrapper[4739]: E0218 14:24:42.415203 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:24:42 crc kubenswrapper[4739]: I0218 14:24:42.924941 4739 generic.go:334] "Generic (PLEG): container finished" podID="147cff80-30af-4fc7-961f-5f6e17af51bb" containerID="719754d11a438c2796a0ba11ae2f879324b6243f92382b8f8f42f425c9043930" exitCode=0 Feb 18 14:24:42 crc kubenswrapper[4739]: I0218 14:24:42.924991 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mvdqm" event={"ID":"147cff80-30af-4fc7-961f-5f6e17af51bb","Type":"ContainerDied","Data":"719754d11a438c2796a0ba11ae2f879324b6243f92382b8f8f42f425c9043930"} Feb 18 14:24:43 crc kubenswrapper[4739]: I0218 14:24:43.171702 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.3:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 14:24:43 crc kubenswrapper[4739]: I0218 14:24:43.171702 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" 
containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.3:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.499338 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.591651 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-combined-ca-bundle\") pod \"147cff80-30af-4fc7-961f-5f6e17af51bb\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.591725 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9ld7\" (UniqueName: \"kubernetes.io/projected/147cff80-30af-4fc7-961f-5f6e17af51bb-kube-api-access-p9ld7\") pod \"147cff80-30af-4fc7-961f-5f6e17af51bb\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.591806 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-scripts\") pod \"147cff80-30af-4fc7-961f-5f6e17af51bb\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.591851 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-config-data\") pod \"147cff80-30af-4fc7-961f-5f6e17af51bb\" (UID: \"147cff80-30af-4fc7-961f-5f6e17af51bb\") " Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.610556 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-scripts" (OuterVolumeSpecName: "scripts") pod "147cff80-30af-4fc7-961f-5f6e17af51bb" (UID: "147cff80-30af-4fc7-961f-5f6e17af51bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.610568 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/147cff80-30af-4fc7-961f-5f6e17af51bb-kube-api-access-p9ld7" (OuterVolumeSpecName: "kube-api-access-p9ld7") pod "147cff80-30af-4fc7-961f-5f6e17af51bb" (UID: "147cff80-30af-4fc7-961f-5f6e17af51bb"). InnerVolumeSpecName "kube-api-access-p9ld7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.629957 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "147cff80-30af-4fc7-961f-5f6e17af51bb" (UID: "147cff80-30af-4fc7-961f-5f6e17af51bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.632672 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-config-data" (OuterVolumeSpecName: "config-data") pod "147cff80-30af-4fc7-961f-5f6e17af51bb" (UID: "147cff80-30af-4fc7-961f-5f6e17af51bb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.695352 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.695389 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9ld7\" (UniqueName: \"kubernetes.io/projected/147cff80-30af-4fc7-961f-5f6e17af51bb-kube-api-access-p9ld7\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.695403 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.695435 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/147cff80-30af-4fc7-961f-5f6e17af51bb-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.951894 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mvdqm" event={"ID":"147cff80-30af-4fc7-961f-5f6e17af51bb","Type":"ContainerDied","Data":"7262bc61ab6d16b820ba5ec18f0720332300bcbef4ac82b91ce508f15faf1096"} Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.951942 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7262bc61ab6d16b820ba5ec18f0720332300bcbef4ac82b91ce508f15faf1096" Feb 18 14:24:44 crc kubenswrapper[4739]: I0218 14:24:44.951984 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mvdqm" Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.237334 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.237832 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" containerName="nova-api-log" containerID="cri-o://4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0" gracePeriod=30 Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.237905 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" containerName="nova-api-api" containerID="cri-o://fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2" gracePeriod=30 Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.272339 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.272626 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2c9cba7f-9b49-4413-a546-9ecf1950d543" containerName="nova-scheduler-scheduler" containerID="cri-o://8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7" gracePeriod=30 Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.337569 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.337942 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" 
containerName="nova-metadata-metadata" containerID="cri-o://82597e5883ccf1e7783fac27d49ed242689bb7c4947b55ae4f7dbaeea0b394fe" gracePeriod=30 Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.337865 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerName="nova-metadata-log" containerID="cri-o://9b767ad311330c4e783eb9ba94b73f05cfa35a7e1442008a10e0fcd720bff176" gracePeriod=30 Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.510053 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wg5zz" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="registry-server" probeResult="failure" output=< Feb 18 14:24:45 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:24:45 crc kubenswrapper[4739]: > Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.965038 4739 generic.go:334] "Generic (PLEG): container finished" podID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerID="9b767ad311330c4e783eb9ba94b73f05cfa35a7e1442008a10e0fcd720bff176" exitCode=143 Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.965150 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9","Type":"ContainerDied","Data":"9b767ad311330c4e783eb9ba94b73f05cfa35a7e1442008a10e0fcd720bff176"} Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.967262 4739 generic.go:334] "Generic (PLEG): container finished" podID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" containerID="4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0" exitCode=143 Feb 18 14:24:45 crc kubenswrapper[4739]: I0218 14:24:45.967297 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61e22e5d-021a-404b-b763-cf02d6f2bc9e","Type":"ContainerDied","Data":"4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0"} Feb 18 14:24:46 crc kubenswrapper[4739]: E0218 14:24:46.320985 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 14:24:46 crc kubenswrapper[4739]: E0218 14:24:46.329968 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 14:24:46 crc kubenswrapper[4739]: E0218 14:24:46.334790 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 14:24:46 crc kubenswrapper[4739]: E0218 14:24:46.334857 4739 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="2c9cba7f-9b49-4413-a546-9ecf1950d543" 
containerName="nova-scheduler-scheduler" Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.448603 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.571899 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c9cba7f-9b49-4413-a546-9ecf1950d543-combined-ca-bundle\") pod \"2c9cba7f-9b49-4413-a546-9ecf1950d543\" (UID: \"2c9cba7f-9b49-4413-a546-9ecf1950d543\") " Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.572101 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c9cba7f-9b49-4413-a546-9ecf1950d543-config-data\") pod \"2c9cba7f-9b49-4413-a546-9ecf1950d543\" (UID: \"2c9cba7f-9b49-4413-a546-9ecf1950d543\") " Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.572285 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhdw6\" (UniqueName: \"kubernetes.io/projected/2c9cba7f-9b49-4413-a546-9ecf1950d543-kube-api-access-dhdw6\") pod \"2c9cba7f-9b49-4413-a546-9ecf1950d543\" (UID: \"2c9cba7f-9b49-4413-a546-9ecf1950d543\") " Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.578896 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c9cba7f-9b49-4413-a546-9ecf1950d543-kube-api-access-dhdw6" (OuterVolumeSpecName: "kube-api-access-dhdw6") pod "2c9cba7f-9b49-4413-a546-9ecf1950d543" (UID: "2c9cba7f-9b49-4413-a546-9ecf1950d543"). InnerVolumeSpecName "kube-api-access-dhdw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.610120 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c9cba7f-9b49-4413-a546-9ecf1950d543-config-data" (OuterVolumeSpecName: "config-data") pod "2c9cba7f-9b49-4413-a546-9ecf1950d543" (UID: "2c9cba7f-9b49-4413-a546-9ecf1950d543"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.612747 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c9cba7f-9b49-4413-a546-9ecf1950d543-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c9cba7f-9b49-4413-a546-9ecf1950d543" (UID: "2c9cba7f-9b49-4413-a546-9ecf1950d543"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.675916 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c9cba7f-9b49-4413-a546-9ecf1950d543-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.675957 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhdw6\" (UniqueName: \"kubernetes.io/projected/2c9cba7f-9b49-4413-a546-9ecf1950d543-kube-api-access-dhdw6\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.675973 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c9cba7f-9b49-4413-a546-9ecf1950d543-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.989263 4739 generic.go:334] "Generic (PLEG): container finished" podID="2c9cba7f-9b49-4413-a546-9ecf1950d543" containerID="8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7" exitCode=0 Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.989331 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.989352 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2c9cba7f-9b49-4413-a546-9ecf1950d543","Type":"ContainerDied","Data":"8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7"} Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.989646 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2c9cba7f-9b49-4413-a546-9ecf1950d543","Type":"ContainerDied","Data":"55bf56fc29bc6c5c7c73f1b370236bcbca1545fe9a2d06fed65e1f34bd49bd9b"} Feb 18 14:24:47 crc kubenswrapper[4739]: I0218 14:24:47.989667 4739 scope.go:117] "RemoveContainer" containerID="8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.026832 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.030272 4739 scope.go:117] "RemoveContainer" containerID="8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7" Feb 18 14:24:48 crc kubenswrapper[4739]: E0218 14:24:48.030845 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7\": container with ID starting with 8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7 not found: ID does not exist" containerID="8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.030889 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7"} err="failed to get container status \"8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7\": rpc error: code = NotFound desc = could not find container \"8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7\": container with ID starting with 8fbc8f84209b416a34fed68560a1e9ae5e75b56cdcc1fb6953941c78922ad2b7 not found: ID does not exist" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.038833 4739 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.064534 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:24:48 crc kubenswrapper[4739]: E0218 14:24:48.065555 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb3e9cc3-348e-4556-89a2-ea261dd47147" containerName="init" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.065580 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb3e9cc3-348e-4556-89a2-ea261dd47147" containerName="init" Feb 18 14:24:48 crc kubenswrapper[4739]: E0218 14:24:48.065597 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147cff80-30af-4fc7-961f-5f6e17af51bb" containerName="nova-manage" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.065605 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="147cff80-30af-4fc7-961f-5f6e17af51bb" containerName="nova-manage" Feb 18 14:24:48 crc kubenswrapper[4739]: E0218 14:24:48.065672 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c9cba7f-9b49-4413-a546-9ecf1950d543" containerName="nova-scheduler-scheduler" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.065683 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c9cba7f-9b49-4413-a546-9ecf1950d543" containerName="nova-scheduler-scheduler" Feb 18 14:24:48 crc kubenswrapper[4739]: E0218 14:24:48.065704 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb3e9cc3-348e-4556-89a2-ea261dd47147" containerName="dnsmasq-dns" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.065713 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb3e9cc3-348e-4556-89a2-ea261dd47147" containerName="dnsmasq-dns" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.066143 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="147cff80-30af-4fc7-961f-5f6e17af51bb" containerName="nova-manage" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.066175 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c9cba7f-9b49-4413-a546-9ecf1950d543" containerName="nova-scheduler-scheduler" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.066225 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb3e9cc3-348e-4556-89a2-ea261dd47147" containerName="dnsmasq-dns" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.071860 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.080236 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.098050 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.192759 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnccw\" (UniqueName: \"kubernetes.io/projected/ba769c63-86fa-4971-afd8-4e3a57c94c37-kube-api-access-rnccw\") pod \"nova-scheduler-0\" (UID: \"ba769c63-86fa-4971-afd8-4e3a57c94c37\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.193417 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba769c63-86fa-4971-afd8-4e3a57c94c37-config-data\") pod \"nova-scheduler-0\" (UID: \"ba769c63-86fa-4971-afd8-4e3a57c94c37\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.193972 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba769c63-86fa-4971-afd8-4e3a57c94c37-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ba769c63-86fa-4971-afd8-4e3a57c94c37\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.296751 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba769c63-86fa-4971-afd8-4e3a57c94c37-config-data\") pod \"nova-scheduler-0\" (UID: \"ba769c63-86fa-4971-afd8-4e3a57c94c37\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.296959 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba769c63-86fa-4971-afd8-4e3a57c94c37-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ba769c63-86fa-4971-afd8-4e3a57c94c37\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.297071 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnccw\" (UniqueName: \"kubernetes.io/projected/ba769c63-86fa-4971-afd8-4e3a57c94c37-kube-api-access-rnccw\") pod \"nova-scheduler-0\" (UID: \"ba769c63-86fa-4971-afd8-4e3a57c94c37\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.303998 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba769c63-86fa-4971-afd8-4e3a57c94c37-config-data\") pod \"nova-scheduler-0\" (UID: \"ba769c63-86fa-4971-afd8-4e3a57c94c37\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.315041 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba769c63-86fa-4971-afd8-4e3a57c94c37-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ba769c63-86fa-4971-afd8-4e3a57c94c37\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.315741 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnccw\" (UniqueName: 
\"kubernetes.io/projected/ba769c63-86fa-4971-afd8-4e3a57c94c37-kube-api-access-rnccw\") pod \"nova-scheduler-0\" (UID: \"ba769c63-86fa-4971-afd8-4e3a57c94c37\") " pod="openstack/nova-scheduler-0" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.411112 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.426524 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c9cba7f-9b49-4413-a546-9ecf1950d543" path="/var/lib/kubelet/pods/2c9cba7f-9b49-4413-a546-9ecf1950d543/volumes" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.476986 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": read tcp 10.217.0.2:51350->10.217.0.250:8775: read: connection reset by peer" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.477164 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": read tcp 10.217.0.2:51352->10.217.0.250:8775: read: connection reset by peer" Feb 18 14:24:48 crc kubenswrapper[4739]: I0218 14:24:48.876271 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.006415 4739 generic.go:334] "Generic (PLEG): container finished" podID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" containerID="fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2" exitCode=0 Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.006519 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61e22e5d-021a-404b-b763-cf02d6f2bc9e","Type":"ContainerDied","Data":"fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2"} Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.006527 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.006556 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61e22e5d-021a-404b-b763-cf02d6f2bc9e","Type":"ContainerDied","Data":"f916a10a472599240a0b09bda183874925aa520b59c1c803a4b2bd0281891f10"} Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.006581 4739 scope.go:117] "RemoveContainer" containerID="fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.012623 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-internal-tls-certs\") pod \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.012677 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-public-tls-certs\") pod \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.012772 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mksjx\" (UniqueName: \"kubernetes.io/projected/61e22e5d-021a-404b-b763-cf02d6f2bc9e-kube-api-access-mksjx\") pod \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.012954 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-config-data\") pod \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.013099 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-combined-ca-bundle\") pod \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.013201 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61e22e5d-021a-404b-b763-cf02d6f2bc9e-logs\") pod \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\" (UID: \"61e22e5d-021a-404b-b763-cf02d6f2bc9e\") " Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.014163 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61e22e5d-021a-404b-b763-cf02d6f2bc9e-logs" (OuterVolumeSpecName: "logs") pod "61e22e5d-021a-404b-b763-cf02d6f2bc9e" (UID: "61e22e5d-021a-404b-b763-cf02d6f2bc9e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.014518 4739 generic.go:334] "Generic (PLEG): container finished" podID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerID="82597e5883ccf1e7783fac27d49ed242689bb7c4947b55ae4f7dbaeea0b394fe" exitCode=0 Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.014552 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9","Type":"ContainerDied","Data":"82597e5883ccf1e7783fac27d49ed242689bb7c4947b55ae4f7dbaeea0b394fe"} Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.015006 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61e22e5d-021a-404b-b763-cf02d6f2bc9e-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.019371 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61e22e5d-021a-404b-b763-cf02d6f2bc9e-kube-api-access-mksjx" (OuterVolumeSpecName: "kube-api-access-mksjx") pod "61e22e5d-021a-404b-b763-cf02d6f2bc9e" (UID: "61e22e5d-021a-404b-b763-cf02d6f2bc9e"). InnerVolumeSpecName "kube-api-access-mksjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.019407 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.062972 4739 scope.go:117] "RemoveContainer" containerID="4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.072594 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61e22e5d-021a-404b-b763-cf02d6f2bc9e" (UID: "61e22e5d-021a-404b-b763-cf02d6f2bc9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.074617 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-config-data" (OuterVolumeSpecName: "config-data") pod "61e22e5d-021a-404b-b763-cf02d6f2bc9e" (UID: "61e22e5d-021a-404b-b763-cf02d6f2bc9e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.097901 4739 scope.go:117] "RemoveContainer" containerID="fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2" Feb 18 14:24:49 crc kubenswrapper[4739]: E0218 14:24:49.101281 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2\": container with ID starting with fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2 not found: ID does not exist" containerID="fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.101343 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2"} err="failed to get container status \"fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2\": rpc error: code = NotFound desc = could not find container \"fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2\": container with ID starting with fee0671017861e27d13abe236945225b9ed63047d86ba210d83f1897165449e2 not found: ID does not exist" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.101378 4739 scope.go:117] "RemoveContainer" containerID="4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0" Feb 18 14:24:49 crc kubenswrapper[4739]: E0218 14:24:49.102094 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0\": container with ID starting with 4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0 not found: ID does not exist" containerID="4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.102292 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0"} err="failed to get container status \"4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0\": rpc error: code = NotFound desc = could not find container \"4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0\": container with ID starting with 4707338df27e82b2e76c2c061d8f09857d095c2cd625ed48dbf4960e1983d6d0 not found: ID does not exist" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.105089 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "61e22e5d-021a-404b-b763-cf02d6f2bc9e" (UID: "61e22e5d-021a-404b-b763-cf02d6f2bc9e"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.123222 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.123289 4739 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.123320 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mksjx\" (UniqueName: \"kubernetes.io/projected/61e22e5d-021a-404b-b763-cf02d6f2bc9e-kube-api-access-mksjx\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.124328 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:49 crc kubenswrapper[4739]: W0218 14:24:49.129941 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba769c63_86fa_4971_afd8_4e3a57c94c37.slice/crio-7e3d643592b986c6aa092e3f0c21e8cd6b542f4411dc9f2b5ee9e6c549923bf8 WatchSource:0}: Error finding container 7e3d643592b986c6aa092e3f0c21e8cd6b542f4411dc9f2b5ee9e6c549923bf8: Status 404 returned error can't find the container with id 7e3d643592b986c6aa092e3f0c21e8cd6b542f4411dc9f2b5ee9e6c549923bf8 Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.147764 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.154454 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "61e22e5d-021a-404b-b763-cf02d6f2bc9e" (UID: "61e22e5d-021a-404b-b763-cf02d6f2bc9e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.226153 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-logs\") pod \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.226543 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-nova-metadata-tls-certs\") pod \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.226611 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-combined-ca-bundle\") pod \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.226652 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-config-data\") pod \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.227093 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-logs" (OuterVolumeSpecName: "logs") pod "9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" (UID: "9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.227638 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxkct\" (UniqueName: \"kubernetes.io/projected/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-kube-api-access-kxkct\") pod \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\" (UID: \"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9\") " Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.228578 4739 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-logs\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.228603 4739 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61e22e5d-021a-404b-b763-cf02d6f2bc9e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.231632 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-kube-api-access-kxkct" (OuterVolumeSpecName: "kube-api-access-kxkct") pod "9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" (UID: "9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9"). InnerVolumeSpecName "kube-api-access-kxkct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.269769 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" (UID: "9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.281502 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-config-data" (OuterVolumeSpecName: "config-data") pod "9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" (UID: "9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.330782 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.331104 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.331117 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxkct\" (UniqueName: \"kubernetes.io/projected/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-kube-api-access-kxkct\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.343504 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" (UID: "9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.433392 4739 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.506073 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.524887 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.538634 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 14:24:49 crc kubenswrapper[4739]: E0218 14:24:49.539226 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" containerName="nova-api-log" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.539249 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" containerName="nova-api-log" Feb 18 14:24:49 crc kubenswrapper[4739]: E0218 14:24:49.539279 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" containerName="nova-api-api" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.539287 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" containerName="nova-api-api" Feb 18 14:24:49 crc kubenswrapper[4739]: E0218 14:24:49.539314 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerName="nova-metadata-metadata" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.539323 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerName="nova-metadata-metadata" Feb 18 14:24:49 crc kubenswrapper[4739]: E0218 14:24:49.539344 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerName="nova-metadata-log" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.539353 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerName="nova-metadata-log" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.539658 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerName="nova-metadata-log" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.539684 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" containerName="nova-api-api" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.539707 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" containerName="nova-api-log" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.539722 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" containerName="nova-metadata-metadata" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.541122 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.544604 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.544909 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.544954 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.553798 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.636631 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgvr7\" (UniqueName: \"kubernetes.io/projected/3797374a-f0e4-4ba5-8974-c0049bad543a-kube-api-access-vgvr7\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.636695 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3797374a-f0e4-4ba5-8974-c0049bad543a-logs\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.636734 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3797374a-f0e4-4ba5-8974-c0049bad543a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.636799 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3797374a-f0e4-4ba5-8974-c0049bad543a-config-data\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.636825 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3797374a-f0e4-4ba5-8974-c0049bad543a-public-tls-certs\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.636891 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3797374a-f0e4-4ba5-8974-c0049bad543a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.739028 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgvr7\" (UniqueName: \"kubernetes.io/projected/3797374a-f0e4-4ba5-8974-c0049bad543a-kube-api-access-vgvr7\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.739365 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3797374a-f0e4-4ba5-8974-c0049bad543a-logs\") pod \"nova-api-0\" (UID: 
\"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.739526 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3797374a-f0e4-4ba5-8974-c0049bad543a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.739786 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3797374a-f0e4-4ba5-8974-c0049bad543a-logs\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.742026 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3797374a-f0e4-4ba5-8974-c0049bad543a-config-data\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.742154 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3797374a-f0e4-4ba5-8974-c0049bad543a-public-tls-certs\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.742410 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3797374a-f0e4-4ba5-8974-c0049bad543a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.743731 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3797374a-f0e4-4ba5-8974-c0049bad543a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.745292 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3797374a-f0e4-4ba5-8974-c0049bad543a-config-data\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.745307 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3797374a-f0e4-4ba5-8974-c0049bad543a-public-tls-certs\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.745974 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3797374a-f0e4-4ba5-8974-c0049bad543a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.757313 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgvr7\" (UniqueName: \"kubernetes.io/projected/3797374a-f0e4-4ba5-8974-c0049bad543a-kube-api-access-vgvr7\") pod \"nova-api-0\" (UID: \"3797374a-f0e4-4ba5-8974-c0049bad543a\") " pod="openstack/nova-api-0" Feb 
18 14:24:49 crc kubenswrapper[4739]: I0218 14:24:49.862575 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.028137 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ba769c63-86fa-4971-afd8-4e3a57c94c37","Type":"ContainerStarted","Data":"1636797c88c7bca5dee0562720c817ad4b49b23532aeac6a2f073a39b49226f8"} Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.028180 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ba769c63-86fa-4971-afd8-4e3a57c94c37","Type":"ContainerStarted","Data":"7e3d643592b986c6aa092e3f0c21e8cd6b542f4411dc9f2b5ee9e6c549923bf8"} Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.039904 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9","Type":"ContainerDied","Data":"bfd6dae4fb10d51320c5b40851cb77928f9eb337a4774f99be8d60a2033f0bdc"} Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.039977 4739 scope.go:117] "RemoveContainer" containerID="82597e5883ccf1e7783fac27d49ed242689bb7c4947b55ae4f7dbaeea0b394fe" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.040026 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.086683 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.086640823 podStartE2EDuration="2.086640823s" podCreationTimestamp="2026-02-18 14:24:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:24:50.044875162 +0000 UTC m=+1522.540596084" watchObservedRunningTime="2026-02-18 14:24:50.086640823 +0000 UTC m=+1522.582361735" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.140997 4739 scope.go:117] "RemoveContainer" containerID="9b767ad311330c4e783eb9ba94b73f05cfa35a7e1442008a10e0fcd720bff176" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.154681 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.175844 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.189407 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.191203 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.193391 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.193500 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.214499 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.364268 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-logs\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.364467 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.364499 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-config-data\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.364536 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.364637 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6fhm\" (UniqueName: \"kubernetes.io/projected/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-kube-api-access-s6fhm\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: W0218 14:24:50.396649 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3797374a_f0e4_4ba5_8974_c0049bad543a.slice/crio-b0289f81c423956ccff6abd65e2cc7e54fe9cd32532b4b892ccd50bb8c16fe97 WatchSource:0}: Error finding container b0289f81c423956ccff6abd65e2cc7e54fe9cd32532b4b892ccd50bb8c16fe97: Status 404 returned error can't find the container with id b0289f81c423956ccff6abd65e2cc7e54fe9cd32532b4b892ccd50bb8c16fe97 Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.399695 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.428073 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61e22e5d-021a-404b-b763-cf02d6f2bc9e" path="/var/lib/kubelet/pods/61e22e5d-021a-404b-b763-cf02d6f2bc9e/volumes" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.429059 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9" path="/var/lib/kubelet/pods/9eb3f59c-d6e1-4eb7-ad1d-75644646a2f9/volumes" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.466379 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-logs\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.466725 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.466853 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-config-data\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.467638 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.467798 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6fhm\" (UniqueName: \"kubernetes.io/projected/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-kube-api-access-s6fhm\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.468103 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-logs\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.473545 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-config-data\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.474851 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.477539 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.486698 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6fhm\" (UniqueName: 
\"kubernetes.io/projected/2ab30c1a-7b94-430a-ac85-ebe051fadbfe-kube-api-access-s6fhm\") pod \"nova-metadata-0\" (UID: \"2ab30c1a-7b94-430a-ac85-ebe051fadbfe\") " pod="openstack/nova-metadata-0" Feb 18 14:24:50 crc kubenswrapper[4739]: I0218 14:24:50.519829 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 14:24:51 crc kubenswrapper[4739]: I0218 14:24:51.028706 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 14:24:51 crc kubenswrapper[4739]: I0218 14:24:51.055873 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ab30c1a-7b94-430a-ac85-ebe051fadbfe","Type":"ContainerStarted","Data":"b7949bf0504636ba7470d86467b8f7a73f72aaed74e2bece861ff361637d8ca6"} Feb 18 14:24:51 crc kubenswrapper[4739]: I0218 14:24:51.060911 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3797374a-f0e4-4ba5-8974-c0049bad543a","Type":"ContainerStarted","Data":"63b5fc512db3014a7d27150983656813bfb6384f0c18e481a78d2d5a2cf9e2de"} Feb 18 14:24:51 crc kubenswrapper[4739]: I0218 14:24:51.060962 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3797374a-f0e4-4ba5-8974-c0049bad543a","Type":"ContainerStarted","Data":"b0289f81c423956ccff6abd65e2cc7e54fe9cd32532b4b892ccd50bb8c16fe97"} Feb 18 14:24:51 crc kubenswrapper[4739]: I0218 14:24:51.094807 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.094782008 podStartE2EDuration="2.094782008s" podCreationTimestamp="2026-02-18 14:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:24:51.083920044 +0000 UTC m=+1523.579640986" watchObservedRunningTime="2026-02-18 14:24:51.094782008 +0000 UTC m=+1523.590502940" Feb 18 14:24:52 crc kubenswrapper[4739]: I0218 14:24:52.086550 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ab30c1a-7b94-430a-ac85-ebe051fadbfe","Type":"ContainerStarted","Data":"02ec586e36f7939cda1f715fd21a9c1aac1cb9c54f06b99b38b45d3c69507700"} Feb 18 14:24:52 crc kubenswrapper[4739]: I0218 14:24:52.087184 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2ab30c1a-7b94-430a-ac85-ebe051fadbfe","Type":"ContainerStarted","Data":"168c96ecd94121fa27a50b0fd7a3cbd831d0d9dc5f7694db3143ce6d5c7a4ac4"} Feb 18 14:24:52 crc kubenswrapper[4739]: I0218 14:24:52.091752 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3797374a-f0e4-4ba5-8974-c0049bad543a","Type":"ContainerStarted","Data":"9a3e3811d24c3d72675df149801c40c079f95efb8af67e161c27273ea4b83485"} Feb 18 14:24:52 crc kubenswrapper[4739]: I0218 14:24:52.121941 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.12191616 podStartE2EDuration="2.12191616s" podCreationTimestamp="2026-02-18 14:24:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:24:52.10998172 +0000 UTC m=+1524.605702662" watchObservedRunningTime="2026-02-18 14:24:52.12191616 +0000 UTC m=+1524.617637082" Feb 18 14:24:53 crc kubenswrapper[4739]: I0218 14:24:53.410355 4739 scope.go:117] "RemoveContainer" 
containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:24:53 crc kubenswrapper[4739]: E0218 14:24:53.410964 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:24:53 crc kubenswrapper[4739]: I0218 14:24:53.411179 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 14:24:55 crc kubenswrapper[4739]: I0218 14:24:55.495727 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wg5zz" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="registry-server" probeResult="failure" output=< Feb 18 14:24:55 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:24:55 crc kubenswrapper[4739]: > Feb 18 14:24:55 crc kubenswrapper[4739]: I0218 14:24:55.520471 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 14:24:55 crc kubenswrapper[4739]: I0218 14:24:55.520839 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.148964 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.163955 4739 generic.go:334] "Generic (PLEG): container finished" podID="42803b7f-4360-4d79-94e6-ab17944142ab" containerID="7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db" exitCode=137 Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.164007 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.164023 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"42803b7f-4360-4d79-94e6-ab17944142ab","Type":"ContainerDied","Data":"7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db"} Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.164210 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"42803b7f-4360-4d79-94e6-ab17944142ab","Type":"ContainerDied","Data":"8762dd17c92d0766d85297d3b8ff657afb0c476107270f6df46caae48fe9cee4"} Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.164232 4739 scope.go:117] "RemoveContainer" containerID="7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.232359 4739 scope.go:117] "RemoveContainer" containerID="02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.273282 4739 scope.go:117] "RemoveContainer" containerID="5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.274918 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-scripts\") pod \"42803b7f-4360-4d79-94e6-ab17944142ab\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.275217 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmddt\" (UniqueName: \"kubernetes.io/projected/42803b7f-4360-4d79-94e6-ab17944142ab-kube-api-access-hmddt\") pod \"42803b7f-4360-4d79-94e6-ab17944142ab\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.275269 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-combined-ca-bundle\") pod \"42803b7f-4360-4d79-94e6-ab17944142ab\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.275340 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-config-data\") pod \"42803b7f-4360-4d79-94e6-ab17944142ab\" (UID: \"42803b7f-4360-4d79-94e6-ab17944142ab\") " Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.283218 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-scripts" (OuterVolumeSpecName: "scripts") pod "42803b7f-4360-4d79-94e6-ab17944142ab" (UID: "42803b7f-4360-4d79-94e6-ab17944142ab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.300536 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42803b7f-4360-4d79-94e6-ab17944142ab-kube-api-access-hmddt" (OuterVolumeSpecName: "kube-api-access-hmddt") pod "42803b7f-4360-4d79-94e6-ab17944142ab" (UID: "42803b7f-4360-4d79-94e6-ab17944142ab"). InnerVolumeSpecName "kube-api-access-hmddt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.379404 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmddt\" (UniqueName: \"kubernetes.io/projected/42803b7f-4360-4d79-94e6-ab17944142ab-kube-api-access-hmddt\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.379464 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.435792 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.457282 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42803b7f-4360-4d79-94e6-ab17944142ab" (UID: "42803b7f-4360-4d79-94e6-ab17944142ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.457718 4739 scope.go:117] "RemoveContainer" containerID="941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.467823 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-config-data" (OuterVolumeSpecName: "config-data") pod "42803b7f-4360-4d79-94e6-ab17944142ab" (UID: "42803b7f-4360-4d79-94e6-ab17944142ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.472293 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.484713 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.484989 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42803b7f-4360-4d79-94e6-ab17944142ab-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.509863 4739 scope.go:117] "RemoveContainer" containerID="7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db" Feb 18 14:24:58 crc kubenswrapper[4739]: E0218 14:24:58.510381 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db\": container with ID starting with 7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db not found: ID does not exist" containerID="7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.510409 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db"} err="failed to get container status \"7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db\": rpc error: code = NotFound desc = could not find 
container \"7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db\": container with ID starting with 7c2c99ad8f5f0dcd59450b79c08ee6065c90a75e54a8f4667a4a38acc67d60db not found: ID does not exist" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.510430 4739 scope.go:117] "RemoveContainer" containerID="02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d" Feb 18 14:24:58 crc kubenswrapper[4739]: E0218 14:24:58.510760 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d\": container with ID starting with 02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d not found: ID does not exist" containerID="02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.510811 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d"} err="failed to get container status \"02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d\": rpc error: code = NotFound desc = could not find container \"02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d\": container with ID starting with 02ed912c8de7f924761f0b7c0d93ebd19677da80caa953426dde9fa5baa2e95d not found: ID does not exist" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.510840 4739 scope.go:117] "RemoveContainer" containerID="5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3" Feb 18 14:24:58 crc kubenswrapper[4739]: E0218 14:24:58.511110 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3\": container with ID starting with 5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3 not found: ID does not exist" containerID="5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.511140 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3"} err="failed to get container status \"5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3\": rpc error: code = NotFound desc = could not find container \"5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3\": container with ID starting with 5d2d8d0b1c0ed0573b36cc7742b1fdb01870aaa18e9a96a029c2751545df63c3 not found: ID does not exist" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.511160 4739 scope.go:117] "RemoveContainer" containerID="941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5" Feb 18 14:24:58 crc kubenswrapper[4739]: E0218 14:24:58.511366 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5\": container with ID starting with 941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5 not found: ID does not exist" containerID="941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.511390 4739 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5"} err="failed to get container status \"941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5\": rpc error: code = NotFound desc = could not find container \"941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5\": container with ID starting with 941d892baee1cee8fcb10f6d346f4642b7f9ffd28461960a3d3aaa9787f6b3d5 not found: ID does not exist" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.812126 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.836158 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.854205 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 18 14:24:58 crc kubenswrapper[4739]: E0218 14:24:58.854784 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-evaluator" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.854804 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-evaluator" Feb 18 14:24:58 crc kubenswrapper[4739]: E0218 14:24:58.854816 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-notifier" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.854823 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-notifier" Feb 18 14:24:58 crc kubenswrapper[4739]: E0218 14:24:58.854862 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-api" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.854871 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-api" Feb 18 14:24:58 crc kubenswrapper[4739]: E0218 14:24:58.854881 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-listener" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.854887 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-listener" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.855134 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-listener" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.855146 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-api" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.855159 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-evaluator" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.855171 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" containerName="aodh-notifier" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.857422 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.859949 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.860206 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-747v8" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.860486 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.860500 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.860568 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 18 14:24:58 crc kubenswrapper[4739]: I0218 14:24:58.872888 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.000562 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-combined-ca-bundle\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.000621 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-public-tls-certs\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.000681 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bhhc\" (UniqueName: \"kubernetes.io/projected/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-kube-api-access-6bhhc\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.000765 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-internal-tls-certs\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.000915 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-scripts\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.000992 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-config-data\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.103307 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-combined-ca-bundle\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" 
Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.103365 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-public-tls-certs\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.103415 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bhhc\" (UniqueName: \"kubernetes.io/projected/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-kube-api-access-6bhhc\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.103494 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-internal-tls-certs\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.103633 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-scripts\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.103706 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-config-data\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.107341 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-internal-tls-certs\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.107920 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-combined-ca-bundle\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.108425 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-config-data\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.108562 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-scripts\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.108975 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-public-tls-certs\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.123706 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bhhc\" 
(UniqueName: \"kubernetes.io/projected/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-kube-api-access-6bhhc\") pod \"aodh-0\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.198609 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.216894 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 14:24:59 crc kubenswrapper[4739]: W0218 14:24:59.734777 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7f699b8_95a0_4a37_8a9b_fb4bd7b46d3e.slice/crio-4d6f0aeaea08a012f733e13300610a5640aaa1fafeeed5ec43bbbd5b2b9a8193 WatchSource:0}: Error finding container 4d6f0aeaea08a012f733e13300610a5640aaa1fafeeed5ec43bbbd5b2b9a8193: Status 404 returned error can't find the container with id 4d6f0aeaea08a012f733e13300610a5640aaa1fafeeed5ec43bbbd5b2b9a8193 Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.737763 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.863283 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 14:24:59 crc kubenswrapper[4739]: I0218 14:24:59.863363 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 14:25:00 crc kubenswrapper[4739]: I0218 14:25:00.199185 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e","Type":"ContainerStarted","Data":"4d6f0aeaea08a012f733e13300610a5640aaa1fafeeed5ec43bbbd5b2b9a8193"} Feb 18 14:25:00 crc kubenswrapper[4739]: I0218 14:25:00.426486 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42803b7f-4360-4d79-94e6-ab17944142ab" path="/var/lib/kubelet/pods/42803b7f-4360-4d79-94e6-ab17944142ab/volumes" Feb 18 14:25:00 crc kubenswrapper[4739]: I0218 14:25:00.522595 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 14:25:00 crc kubenswrapper[4739]: I0218 14:25:00.522971 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 14:25:00 crc kubenswrapper[4739]: I0218 14:25:00.883652 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3797374a-f0e4-4ba5-8974-c0049bad543a" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.7:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 14:25:00 crc kubenswrapper[4739]: I0218 14:25:00.883742 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3797374a-f0e4-4ba5-8974-c0049bad543a" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.7:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 14:25:01 crc kubenswrapper[4739]: I0218 14:25:01.212021 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e","Type":"ContainerStarted","Data":"ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7"} Feb 18 14:25:01 crc kubenswrapper[4739]: I0218 14:25:01.534632 4739 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-metadata-0" podUID="2ab30c1a-7b94-430a-ac85-ebe051fadbfe" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.8:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 14:25:01 crc kubenswrapper[4739]: I0218 14:25:01.534813 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2ab30c1a-7b94-430a-ac85-ebe051fadbfe" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.8:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 14:25:02 crc kubenswrapper[4739]: I0218 14:25:02.227360 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e","Type":"ContainerStarted","Data":"0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba"} Feb 18 14:25:03 crc kubenswrapper[4739]: I0218 14:25:03.363691 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e","Type":"ContainerStarted","Data":"d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578"} Feb 18 14:25:04 crc kubenswrapper[4739]: I0218 14:25:04.380509 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e","Type":"ContainerStarted","Data":"5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2"} Feb 18 14:25:04 crc kubenswrapper[4739]: I0218 14:25:04.410960 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.930595528 podStartE2EDuration="6.410940657s" podCreationTimestamp="2026-02-18 14:24:58 +0000 UTC" firstStartedPulling="2026-02-18 14:24:59.737422087 +0000 UTC m=+1532.233143009" lastFinishedPulling="2026-02-18 14:25:03.217767216 +0000 UTC m=+1535.713488138" observedRunningTime="2026-02-18 14:25:04.404250399 +0000 UTC m=+1536.899971341" watchObservedRunningTime="2026-02-18 14:25:04.410940657 +0000 UTC m=+1536.906661579" Feb 18 14:25:05 crc kubenswrapper[4739]: I0218 14:25:05.241618 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 14:25:05 crc kubenswrapper[4739]: I0218 14:25:05.431203 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:25:05 crc kubenswrapper[4739]: E0218 14:25:05.431595 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:25:05 crc kubenswrapper[4739]: I0218 14:25:05.524540 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wg5zz" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="registry-server" probeResult="failure" output=< Feb 18 14:25:05 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:25:05 crc kubenswrapper[4739]: > Feb 18 14:25:06 crc kubenswrapper[4739]: I0218 14:25:06.174032 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p9dsf"] Feb 18 14:25:06 
crc kubenswrapper[4739]: I0218 14:25:06.178645 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:06 crc kubenswrapper[4739]: I0218 14:25:06.222665 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p9dsf"] Feb 18 14:25:06 crc kubenswrapper[4739]: I0218 14:25:06.254678 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fa37c4d-3105-4641-8568-f29938b5cecc-catalog-content\") pod \"certified-operators-p9dsf\" (UID: \"8fa37c4d-3105-4641-8568-f29938b5cecc\") " pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:06 crc kubenswrapper[4739]: I0218 14:25:06.254798 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzcpg\" (UniqueName: \"kubernetes.io/projected/8fa37c4d-3105-4641-8568-f29938b5cecc-kube-api-access-fzcpg\") pod \"certified-operators-p9dsf\" (UID: \"8fa37c4d-3105-4641-8568-f29938b5cecc\") " pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:06 crc kubenswrapper[4739]: I0218 14:25:06.254850 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fa37c4d-3105-4641-8568-f29938b5cecc-utilities\") pod \"certified-operators-p9dsf\" (UID: \"8fa37c4d-3105-4641-8568-f29938b5cecc\") " pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:06 crc kubenswrapper[4739]: I0218 14:25:06.356616 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fa37c4d-3105-4641-8568-f29938b5cecc-utilities\") pod \"certified-operators-p9dsf\" (UID: \"8fa37c4d-3105-4641-8568-f29938b5cecc\") " pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:06 crc kubenswrapper[4739]: I0218 14:25:06.357040 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fa37c4d-3105-4641-8568-f29938b5cecc-catalog-content\") pod \"certified-operators-p9dsf\" (UID: \"8fa37c4d-3105-4641-8568-f29938b5cecc\") " pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:06 crc kubenswrapper[4739]: I0218 14:25:06.357115 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzcpg\" (UniqueName: \"kubernetes.io/projected/8fa37c4d-3105-4641-8568-f29938b5cecc-kube-api-access-fzcpg\") pod \"certified-operators-p9dsf\" (UID: \"8fa37c4d-3105-4641-8568-f29938b5cecc\") " pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:06 crc kubenswrapper[4739]: I0218 14:25:06.357949 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fa37c4d-3105-4641-8568-f29938b5cecc-utilities\") pod \"certified-operators-p9dsf\" (UID: \"8fa37c4d-3105-4641-8568-f29938b5cecc\") " pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:06 crc kubenswrapper[4739]: I0218 14:25:06.357987 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fa37c4d-3105-4641-8568-f29938b5cecc-catalog-content\") pod \"certified-operators-p9dsf\" (UID: \"8fa37c4d-3105-4641-8568-f29938b5cecc\") " pod="openshift-marketplace/certified-operators-p9dsf" 
Feb 18 14:25:06 crc kubenswrapper[4739]: I0218 14:25:06.392275 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzcpg\" (UniqueName: \"kubernetes.io/projected/8fa37c4d-3105-4641-8568-f29938b5cecc-kube-api-access-fzcpg\") pod \"certified-operators-p9dsf\" (UID: \"8fa37c4d-3105-4641-8568-f29938b5cecc\") " pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:06 crc kubenswrapper[4739]: I0218 14:25:06.503527 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:07 crc kubenswrapper[4739]: I0218 14:25:07.056028 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p9dsf"] Feb 18 14:25:07 crc kubenswrapper[4739]: I0218 14:25:07.497596 4739 generic.go:334] "Generic (PLEG): container finished" podID="8fa37c4d-3105-4641-8568-f29938b5cecc" containerID="4b0e7c8eb140916b6e74a21779841b69e440908d6b9c1495731308eccfead9ee" exitCode=0 Feb 18 14:25:07 crc kubenswrapper[4739]: I0218 14:25:07.497655 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9dsf" event={"ID":"8fa37c4d-3105-4641-8568-f29938b5cecc","Type":"ContainerDied","Data":"4b0e7c8eb140916b6e74a21779841b69e440908d6b9c1495731308eccfead9ee"} Feb 18 14:25:07 crc kubenswrapper[4739]: I0218 14:25:07.497705 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9dsf" event={"ID":"8fa37c4d-3105-4641-8568-f29938b5cecc","Type":"ContainerStarted","Data":"7c5472457e574250ce229e71104c3d275504739870445d71eac020fa408b2be9"} Feb 18 14:25:08 crc kubenswrapper[4739]: I0218 14:25:08.510925 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9dsf" event={"ID":"8fa37c4d-3105-4641-8568-f29938b5cecc","Type":"ContainerStarted","Data":"a106ffa9468bacea91bf206e3ffa0e7c8fce2c895e2ec88f67739b589eca025e"} Feb 18 14:25:09 crc kubenswrapper[4739]: I0218 14:25:09.914595 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 14:25:09 crc kubenswrapper[4739]: I0218 14:25:09.915226 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 14:25:09 crc kubenswrapper[4739]: I0218 14:25:09.928529 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 14:25:09 crc kubenswrapper[4739]: I0218 14:25:09.938124 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 14:25:10 crc kubenswrapper[4739]: I0218 14:25:10.527166 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 14:25:10 crc kubenswrapper[4739]: I0218 14:25:10.531900 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 14:25:10 crc kubenswrapper[4739]: I0218 14:25:10.533903 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 14:25:10 crc kubenswrapper[4739]: I0218 14:25:10.536257 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 14:25:10 crc kubenswrapper[4739]: I0218 14:25:10.549785 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 14:25:11 crc kubenswrapper[4739]: I0218 14:25:11.544660 4739 
generic.go:334] "Generic (PLEG): container finished" podID="8fa37c4d-3105-4641-8568-f29938b5cecc" containerID="a106ffa9468bacea91bf206e3ffa0e7c8fce2c895e2ec88f67739b589eca025e" exitCode=0 Feb 18 14:25:11 crc kubenswrapper[4739]: I0218 14:25:11.545354 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9dsf" event={"ID":"8fa37c4d-3105-4641-8568-f29938b5cecc","Type":"ContainerDied","Data":"a106ffa9468bacea91bf206e3ffa0e7c8fce2c895e2ec88f67739b589eca025e"} Feb 18 14:25:11 crc kubenswrapper[4739]: I0218 14:25:11.550486 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.551032 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d2qcv"] Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.553811 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.558506 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9dsf" event={"ID":"8fa37c4d-3105-4641-8568-f29938b5cecc","Type":"ContainerStarted","Data":"22351a3a2397469328039c02b022a8237d7b70dc6f17d1c811f89df28961a051"} Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.570368 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d2qcv"] Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.631008 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p9dsf" podStartSLOduration=2.010775503 podStartE2EDuration="6.630983901s" podCreationTimestamp="2026-02-18 14:25:06 +0000 UTC" firstStartedPulling="2026-02-18 14:25:07.499661518 +0000 UTC m=+1539.995382440" lastFinishedPulling="2026-02-18 14:25:12.119869916 +0000 UTC m=+1544.615590838" observedRunningTime="2026-02-18 14:25:12.623127563 +0000 UTC m=+1545.118848495" watchObservedRunningTime="2026-02-18 14:25:12.630983901 +0000 UTC m=+1545.126704823" Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.726826 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a62b266-b24d-47e5-ae8d-cb8524e1d628-utilities\") pod \"redhat-marketplace-d2qcv\" (UID: \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\") " pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.727822 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pkwz\" (UniqueName: \"kubernetes.io/projected/0a62b266-b24d-47e5-ae8d-cb8524e1d628-kube-api-access-9pkwz\") pod \"redhat-marketplace-d2qcv\" (UID: \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\") " pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.728178 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a62b266-b24d-47e5-ae8d-cb8524e1d628-catalog-content\") pod \"redhat-marketplace-d2qcv\" (UID: \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\") " pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.830507 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a62b266-b24d-47e5-ae8d-cb8524e1d628-catalog-content\") pod \"redhat-marketplace-d2qcv\" (UID: \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\") " pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.830703 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a62b266-b24d-47e5-ae8d-cb8524e1d628-utilities\") pod \"redhat-marketplace-d2qcv\" (UID: \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\") " pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.830777 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pkwz\" (UniqueName: \"kubernetes.io/projected/0a62b266-b24d-47e5-ae8d-cb8524e1d628-kube-api-access-9pkwz\") pod \"redhat-marketplace-d2qcv\" (UID: \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\") " pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.831389 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a62b266-b24d-47e5-ae8d-cb8524e1d628-catalog-content\") pod \"redhat-marketplace-d2qcv\" (UID: \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\") " pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.831544 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a62b266-b24d-47e5-ae8d-cb8524e1d628-utilities\") pod \"redhat-marketplace-d2qcv\" (UID: \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\") " pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.856282 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pkwz\" (UniqueName: \"kubernetes.io/projected/0a62b266-b24d-47e5-ae8d-cb8524e1d628-kube-api-access-9pkwz\") pod \"redhat-marketplace-d2qcv\" (UID: \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\") " pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:12 crc kubenswrapper[4739]: I0218 14:25:12.873143 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:13 crc kubenswrapper[4739]: I0218 14:25:13.653351 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d2qcv"] Feb 18 14:25:13 crc kubenswrapper[4739]: W0218 14:25:13.653831 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51 WatchSource:0}: Error finding container 2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51: Status 404 returned error can't find the container with id 2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51 Feb 18 14:25:14 crc kubenswrapper[4739]: I0218 14:25:14.508137 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:25:14 crc kubenswrapper[4739]: I0218 14:25:14.572221 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:25:14 crc kubenswrapper[4739]: I0218 14:25:14.628740 4739 generic.go:334] "Generic (PLEG): container finished" podID="0a62b266-b24d-47e5-ae8d-cb8524e1d628" containerID="6c0ee0eafacbca4301c6ded44d73ba09227c9ee1f2e6957623ca4214bd62e5df" exitCode=0 Feb 18 14:25:14 crc kubenswrapper[4739]: I0218 14:25:14.629927 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d2qcv" event={"ID":"0a62b266-b24d-47e5-ae8d-cb8524e1d628","Type":"ContainerDied","Data":"6c0ee0eafacbca4301c6ded44d73ba09227c9ee1f2e6957623ca4214bd62e5df"} Feb 18 14:25:14 crc kubenswrapper[4739]: I0218 14:25:14.630011 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d2qcv" event={"ID":"0a62b266-b24d-47e5-ae8d-cb8524e1d628","Type":"ContainerStarted","Data":"2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51"} Feb 18 14:25:16 crc kubenswrapper[4739]: I0218 14:25:16.504116 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:16 crc kubenswrapper[4739]: I0218 14:25:16.504712 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:16 crc kubenswrapper[4739]: I0218 14:25:16.563343 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:16 crc kubenswrapper[4739]: I0218 14:25:16.664823 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d2qcv" event={"ID":"0a62b266-b24d-47e5-ae8d-cb8524e1d628","Type":"ContainerStarted","Data":"eb767b246d01786ba7d5e7aea0f8547789de5633ab93f7984d8f9084bda9cde1"} Feb 18 14:25:16 crc kubenswrapper[4739]: I0218 14:25:16.750314 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wg5zz"] Feb 18 14:25:16 crc kubenswrapper[4739]: I0218 14:25:16.750625 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wg5zz" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="registry-server" containerID="cri-o://efd61b74e3eaf8a43ba51f508d08a1af562b43d4efba62cb59c8fb5bbe916eec" gracePeriod=2 Feb 18 14:25:17 crc kubenswrapper[4739]: I0218 
14:25:17.687178 4739 generic.go:334] "Generic (PLEG): container finished" podID="0a62b266-b24d-47e5-ae8d-cb8524e1d628" containerID="eb767b246d01786ba7d5e7aea0f8547789de5633ab93f7984d8f9084bda9cde1" exitCode=0 Feb 18 14:25:17 crc kubenswrapper[4739]: I0218 14:25:17.687250 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d2qcv" event={"ID":"0a62b266-b24d-47e5-ae8d-cb8524e1d628","Type":"ContainerDied","Data":"eb767b246d01786ba7d5e7aea0f8547789de5633ab93f7984d8f9084bda9cde1"} Feb 18 14:25:17 crc kubenswrapper[4739]: I0218 14:25:17.693355 4739 generic.go:334] "Generic (PLEG): container finished" podID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerID="efd61b74e3eaf8a43ba51f508d08a1af562b43d4efba62cb59c8fb5bbe916eec" exitCode=0 Feb 18 14:25:17 crc kubenswrapper[4739]: I0218 14:25:17.693398 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg5zz" event={"ID":"0bbaed51-382b-4b1b-8b3f-95521f415472","Type":"ContainerDied","Data":"efd61b74e3eaf8a43ba51f508d08a1af562b43d4efba62cb59c8fb5bbe916eec"} Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.037014 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.180498 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-724gm\" (UniqueName: \"kubernetes.io/projected/0bbaed51-382b-4b1b-8b3f-95521f415472-kube-api-access-724gm\") pod \"0bbaed51-382b-4b1b-8b3f-95521f415472\" (UID: \"0bbaed51-382b-4b1b-8b3f-95521f415472\") " Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.180585 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bbaed51-382b-4b1b-8b3f-95521f415472-catalog-content\") pod \"0bbaed51-382b-4b1b-8b3f-95521f415472\" (UID: \"0bbaed51-382b-4b1b-8b3f-95521f415472\") " Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.180870 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bbaed51-382b-4b1b-8b3f-95521f415472-utilities\") pod \"0bbaed51-382b-4b1b-8b3f-95521f415472\" (UID: \"0bbaed51-382b-4b1b-8b3f-95521f415472\") " Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.182184 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bbaed51-382b-4b1b-8b3f-95521f415472-utilities" (OuterVolumeSpecName: "utilities") pod "0bbaed51-382b-4b1b-8b3f-95521f415472" (UID: "0bbaed51-382b-4b1b-8b3f-95521f415472"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.187562 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bbaed51-382b-4b1b-8b3f-95521f415472-kube-api-access-724gm" (OuterVolumeSpecName: "kube-api-access-724gm") pod "0bbaed51-382b-4b1b-8b3f-95521f415472" (UID: "0bbaed51-382b-4b1b-8b3f-95521f415472"). InnerVolumeSpecName "kube-api-access-724gm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.283213 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bbaed51-382b-4b1b-8b3f-95521f415472-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.283249 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-724gm\" (UniqueName: \"kubernetes.io/projected/0bbaed51-382b-4b1b-8b3f-95521f415472-kube-api-access-724gm\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.314246 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bbaed51-382b-4b1b-8b3f-95521f415472-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0bbaed51-382b-4b1b-8b3f-95521f415472" (UID: "0bbaed51-382b-4b1b-8b3f-95521f415472"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.386035 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bbaed51-382b-4b1b-8b3f-95521f415472-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.706997 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d2qcv" event={"ID":"0a62b266-b24d-47e5-ae8d-cb8524e1d628","Type":"ContainerStarted","Data":"2cf4cbe6ff09b90a4081b821121e04359d9724929504c9ff576ebbffcc98ba2d"} Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.710182 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg5zz" event={"ID":"0bbaed51-382b-4b1b-8b3f-95521f415472","Type":"ContainerDied","Data":"8246321a9a69ef9443f0eafe62f613f2bf2304eee3857bb71521e44ea71bf052"} Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.710232 4739 scope.go:117] "RemoveContainer" containerID="efd61b74e3eaf8a43ba51f508d08a1af562b43d4efba62cb59c8fb5bbe916eec" Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.710277 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wg5zz" Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.735129 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d2qcv" podStartSLOduration=3.284026877 podStartE2EDuration="6.735103679s" podCreationTimestamp="2026-02-18 14:25:12 +0000 UTC" firstStartedPulling="2026-02-18 14:25:14.633728459 +0000 UTC m=+1547.129449381" lastFinishedPulling="2026-02-18 14:25:18.084805261 +0000 UTC m=+1550.580526183" observedRunningTime="2026-02-18 14:25:18.729700683 +0000 UTC m=+1551.225421645" watchObservedRunningTime="2026-02-18 14:25:18.735103679 +0000 UTC m=+1551.230824601" Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.755851 4739 scope.go:117] "RemoveContainer" containerID="0ed9ea0acaa9a000246ad43383e3ff8712eb08ccc211dd774ede3a75ac80e158" Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.772144 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wg5zz"] Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.786978 4739 scope.go:117] "RemoveContainer" containerID="6869795123dd672f097b8cf90d0e5e277663d03ea727ac622ba0a62b525526df" Feb 18 14:25:18 crc kubenswrapper[4739]: I0218 14:25:18.789239 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wg5zz"] Feb 18 14:25:20 crc kubenswrapper[4739]: I0218 14:25:20.412303 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:25:20 crc kubenswrapper[4739]: E0218 14:25:20.413046 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:25:20 crc kubenswrapper[4739]: I0218 14:25:20.424653 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" path="/var/lib/kubelet/pods/0bbaed51-382b-4b1b-8b3f-95521f415472/volumes" Feb 18 14:25:22 crc kubenswrapper[4739]: I0218 14:25:22.873865 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:22 crc kubenswrapper[4739]: I0218 14:25:22.875338 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:22 crc kubenswrapper[4739]: I0218 14:25:22.930592 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.006484 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-2dhxm"] Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.017691 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-2dhxm"] Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.073482 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-zq8vc"] Feb 18 14:25:23 crc kubenswrapper[4739]: E0218 14:25:23.073964 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" 
containerName="extract-utilities" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.073982 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="extract-utilities" Feb 18 14:25:23 crc kubenswrapper[4739]: E0218 14:25:23.073993 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="registry-server" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.073999 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="registry-server" Feb 18 14:25:23 crc kubenswrapper[4739]: E0218 14:25:23.074029 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="extract-content" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.074035 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="extract-content" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.074310 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bbaed51-382b-4b1b-8b3f-95521f415472" containerName="registry-server" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.075305 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-zq8vc" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.099619 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-zq8vc"] Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.211431 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e0a952f-ef12-46c6-8ca8-10f016b441be-config-data\") pod \"heat-db-sync-zq8vc\" (UID: \"6e0a952f-ef12-46c6-8ca8-10f016b441be\") " pod="openstack/heat-db-sync-zq8vc" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.211747 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62h27\" (UniqueName: \"kubernetes.io/projected/6e0a952f-ef12-46c6-8ca8-10f016b441be-kube-api-access-62h27\") pod \"heat-db-sync-zq8vc\" (UID: \"6e0a952f-ef12-46c6-8ca8-10f016b441be\") " pod="openstack/heat-db-sync-zq8vc" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.212104 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e0a952f-ef12-46c6-8ca8-10f016b441be-combined-ca-bundle\") pod \"heat-db-sync-zq8vc\" (UID: \"6e0a952f-ef12-46c6-8ca8-10f016b441be\") " pod="openstack/heat-db-sync-zq8vc" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.314281 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e0a952f-ef12-46c6-8ca8-10f016b441be-config-data\") pod \"heat-db-sync-zq8vc\" (UID: \"6e0a952f-ef12-46c6-8ca8-10f016b441be\") " pod="openstack/heat-db-sync-zq8vc" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.314426 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62h27\" (UniqueName: \"kubernetes.io/projected/6e0a952f-ef12-46c6-8ca8-10f016b441be-kube-api-access-62h27\") pod \"heat-db-sync-zq8vc\" (UID: \"6e0a952f-ef12-46c6-8ca8-10f016b441be\") " pod="openstack/heat-db-sync-zq8vc" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.314574 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e0a952f-ef12-46c6-8ca8-10f016b441be-combined-ca-bundle\") pod \"heat-db-sync-zq8vc\" (UID: \"6e0a952f-ef12-46c6-8ca8-10f016b441be\") " pod="openstack/heat-db-sync-zq8vc" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.320681 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e0a952f-ef12-46c6-8ca8-10f016b441be-combined-ca-bundle\") pod \"heat-db-sync-zq8vc\" (UID: \"6e0a952f-ef12-46c6-8ca8-10f016b441be\") " pod="openstack/heat-db-sync-zq8vc" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.331000 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e0a952f-ef12-46c6-8ca8-10f016b441be-config-data\") pod \"heat-db-sync-zq8vc\" (UID: \"6e0a952f-ef12-46c6-8ca8-10f016b441be\") " pod="openstack/heat-db-sync-zq8vc" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.341023 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62h27\" (UniqueName: \"kubernetes.io/projected/6e0a952f-ef12-46c6-8ca8-10f016b441be-kube-api-access-62h27\") pod \"heat-db-sync-zq8vc\" (UID: \"6e0a952f-ef12-46c6-8ca8-10f016b441be\") " pod="openstack/heat-db-sync-zq8vc" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.399532 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-zq8vc" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.863356 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:23 crc kubenswrapper[4739]: I0218 14:25:23.892700 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-zq8vc"] Feb 18 14:25:23 crc kubenswrapper[4739]: W0218 14:25:23.895916 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e0a952f_ef12_46c6_8ca8_10f016b441be.slice/crio-254128b8b4776a8e196ceddf4f74f11d413bddfc79aebb13e55002e6ac9d1d0a WatchSource:0}: Error finding container 254128b8b4776a8e196ceddf4f74f11d413bddfc79aebb13e55002e6ac9d1d0a: Status 404 returned error can't find the container with id 254128b8b4776a8e196ceddf4f74f11d413bddfc79aebb13e55002e6ac9d1d0a Feb 18 14:25:24 crc kubenswrapper[4739]: I0218 14:25:24.553378 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3edd4390-e376-469a-b7c5-9bd7bf9dd100" path="/var/lib/kubelet/pods/3edd4390-e376-469a-b7c5-9bd7bf9dd100/volumes" Feb 18 14:25:24 crc kubenswrapper[4739]: I0218 14:25:24.813098 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zq8vc" event={"ID":"6e0a952f-ef12-46c6-8ca8-10f016b441be","Type":"ContainerStarted","Data":"254128b8b4776a8e196ceddf4f74f11d413bddfc79aebb13e55002e6ac9d1d0a"} Feb 18 14:25:24 crc kubenswrapper[4739]: I0218 14:25:24.945041 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d2qcv"] Feb 18 14:25:25 crc kubenswrapper[4739]: I0218 14:25:25.768424 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:25:25 crc kubenswrapper[4739]: I0218 14:25:25.769363 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" 
containerName="ceilometer-central-agent" containerID="cri-o://3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13" gracePeriod=30 Feb 18 14:25:25 crc kubenswrapper[4739]: I0218 14:25:25.769523 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="proxy-httpd" containerID="cri-o://6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be" gracePeriod=30 Feb 18 14:25:25 crc kubenswrapper[4739]: I0218 14:25:25.769573 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="sg-core" containerID="cri-o://9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7" gracePeriod=30 Feb 18 14:25:25 crc kubenswrapper[4739]: I0218 14:25:25.769609 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="ceilometer-notification-agent" containerID="cri-o://251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd" gracePeriod=30 Feb 18 14:25:25 crc kubenswrapper[4739]: I0218 14:25:25.885446 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 18 14:25:25 crc kubenswrapper[4739]: I0218 14:25:25.983485 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 14:25:26 crc kubenswrapper[4739]: I0218 14:25:26.623993 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:26 crc kubenswrapper[4739]: I0218 14:25:26.845827 4739 generic.go:334] "Generic (PLEG): container finished" podID="4106c506-1336-4121-a8d7-90fe333ce3df" containerID="6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be" exitCode=0 Feb 18 14:25:26 crc kubenswrapper[4739]: I0218 14:25:26.845868 4739 generic.go:334] "Generic (PLEG): container finished" podID="4106c506-1336-4121-a8d7-90fe333ce3df" containerID="9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7" exitCode=2 Feb 18 14:25:26 crc kubenswrapper[4739]: I0218 14:25:26.845880 4739 generic.go:334] "Generic (PLEG): container finished" podID="4106c506-1336-4121-a8d7-90fe333ce3df" containerID="3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13" exitCode=0 Feb 18 14:25:26 crc kubenswrapper[4739]: I0218 14:25:26.846091 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d2qcv" podUID="0a62b266-b24d-47e5-ae8d-cb8524e1d628" containerName="registry-server" containerID="cri-o://2cf4cbe6ff09b90a4081b821121e04359d9724929504c9ff576ebbffcc98ba2d" gracePeriod=2 Feb 18 14:25:26 crc kubenswrapper[4739]: I0218 14:25:26.846382 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4106c506-1336-4121-a8d7-90fe333ce3df","Type":"ContainerDied","Data":"6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be"} Feb 18 14:25:26 crc kubenswrapper[4739]: I0218 14:25:26.846408 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4106c506-1336-4121-a8d7-90fe333ce3df","Type":"ContainerDied","Data":"9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7"} Feb 18 14:25:26 crc kubenswrapper[4739]: I0218 14:25:26.846417 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"4106c506-1336-4121-a8d7-90fe333ce3df","Type":"ContainerDied","Data":"3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13"} Feb 18 14:25:27 crc kubenswrapper[4739]: I0218 14:25:27.360292 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p9dsf"] Feb 18 14:25:27 crc kubenswrapper[4739]: I0218 14:25:27.361984 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p9dsf" podUID="8fa37c4d-3105-4641-8568-f29938b5cecc" containerName="registry-server" containerID="cri-o://22351a3a2397469328039c02b022a8237d7b70dc6f17d1c811f89df28961a051" gracePeriod=2 Feb 18 14:25:27 crc kubenswrapper[4739]: I0218 14:25:27.894415 4739 generic.go:334] "Generic (PLEG): container finished" podID="8fa37c4d-3105-4641-8568-f29938b5cecc" containerID="22351a3a2397469328039c02b022a8237d7b70dc6f17d1c811f89df28961a051" exitCode=0 Feb 18 14:25:27 crc kubenswrapper[4739]: I0218 14:25:27.894591 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9dsf" event={"ID":"8fa37c4d-3105-4641-8568-f29938b5cecc","Type":"ContainerDied","Data":"22351a3a2397469328039c02b022a8237d7b70dc6f17d1c811f89df28961a051"} Feb 18 14:25:27 crc kubenswrapper[4739]: I0218 14:25:27.899988 4739 generic.go:334] "Generic (PLEG): container finished" podID="0a62b266-b24d-47e5-ae8d-cb8524e1d628" containerID="2cf4cbe6ff09b90a4081b821121e04359d9724929504c9ff576ebbffcc98ba2d" exitCode=0 Feb 18 14:25:27 crc kubenswrapper[4739]: I0218 14:25:27.900027 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d2qcv" event={"ID":"0a62b266-b24d-47e5-ae8d-cb8524e1d628","Type":"ContainerDied","Data":"2cf4cbe6ff09b90a4081b821121e04359d9724929504c9ff576ebbffcc98ba2d"} Feb 18 14:25:27 crc kubenswrapper[4739]: I0218 14:25:27.900052 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d2qcv" event={"ID":"0a62b266-b24d-47e5-ae8d-cb8524e1d628","Type":"ContainerDied","Data":"2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51"} Feb 18 14:25:27 crc kubenswrapper[4739]: I0218 14:25:27.900063 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51" Feb 18 14:25:27 crc kubenswrapper[4739]: I0218 14:25:27.946478 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.004375 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pkwz\" (UniqueName: \"kubernetes.io/projected/0a62b266-b24d-47e5-ae8d-cb8524e1d628-kube-api-access-9pkwz\") pod \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\" (UID: \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\") " Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.004526 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a62b266-b24d-47e5-ae8d-cb8524e1d628-catalog-content\") pod \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\" (UID: \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\") " Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.004591 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a62b266-b24d-47e5-ae8d-cb8524e1d628-utilities\") pod \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\" (UID: \"0a62b266-b24d-47e5-ae8d-cb8524e1d628\") " Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.006610 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a62b266-b24d-47e5-ae8d-cb8524e1d628-utilities" (OuterVolumeSpecName: "utilities") pod "0a62b266-b24d-47e5-ae8d-cb8524e1d628" (UID: "0a62b266-b24d-47e5-ae8d-cb8524e1d628"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.033991 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a62b266-b24d-47e5-ae8d-cb8524e1d628-kube-api-access-9pkwz" (OuterVolumeSpecName: "kube-api-access-9pkwz") pod "0a62b266-b24d-47e5-ae8d-cb8524e1d628" (UID: "0a62b266-b24d-47e5-ae8d-cb8524e1d628"). InnerVolumeSpecName "kube-api-access-9pkwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.072780 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a62b266-b24d-47e5-ae8d-cb8524e1d628-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a62b266-b24d-47e5-ae8d-cb8524e1d628" (UID: "0a62b266-b24d-47e5-ae8d-cb8524e1d628"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.109792 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pkwz\" (UniqueName: \"kubernetes.io/projected/0a62b266-b24d-47e5-ae8d-cb8524e1d628-kube-api-access-9pkwz\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.109837 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a62b266-b24d-47e5-ae8d-cb8524e1d628-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.109849 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a62b266-b24d-47e5-ae8d-cb8524e1d628-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.962593 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.964994 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d2qcv" Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.965083 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9dsf" event={"ID":"8fa37c4d-3105-4641-8568-f29938b5cecc","Type":"ContainerDied","Data":"7c5472457e574250ce229e71104c3d275504739870445d71eac020fa408b2be9"} Feb 18 14:25:28 crc kubenswrapper[4739]: I0218 14:25:28.965153 4739 scope.go:117] "RemoveContainer" containerID="22351a3a2397469328039c02b022a8237d7b70dc6f17d1c811f89df28961a051" Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.048023 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzcpg\" (UniqueName: \"kubernetes.io/projected/8fa37c4d-3105-4641-8568-f29938b5cecc-kube-api-access-fzcpg\") pod \"8fa37c4d-3105-4641-8568-f29938b5cecc\" (UID: \"8fa37c4d-3105-4641-8568-f29938b5cecc\") " Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.048249 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fa37c4d-3105-4641-8568-f29938b5cecc-utilities\") pod \"8fa37c4d-3105-4641-8568-f29938b5cecc\" (UID: \"8fa37c4d-3105-4641-8568-f29938b5cecc\") " Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.048332 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fa37c4d-3105-4641-8568-f29938b5cecc-catalog-content\") pod \"8fa37c4d-3105-4641-8568-f29938b5cecc\" (UID: \"8fa37c4d-3105-4641-8568-f29938b5cecc\") " Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.059037 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d2qcv"] Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.065974 4739 scope.go:117] "RemoveContainer" containerID="a106ffa9468bacea91bf206e3ffa0e7c8fce2c895e2ec88f67739b589eca025e" Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.070373 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fa37c4d-3105-4641-8568-f29938b5cecc-utilities" (OuterVolumeSpecName: "utilities") pod "8fa37c4d-3105-4641-8568-f29938b5cecc" (UID: "8fa37c4d-3105-4641-8568-f29938b5cecc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.086067 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fa37c4d-3105-4641-8568-f29938b5cecc-kube-api-access-fzcpg" (OuterVolumeSpecName: "kube-api-access-fzcpg") pod "8fa37c4d-3105-4641-8568-f29938b5cecc" (UID: "8fa37c4d-3105-4641-8568-f29938b5cecc"). InnerVolumeSpecName "kube-api-access-fzcpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.127613 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d2qcv"] Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.152837 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzcpg\" (UniqueName: \"kubernetes.io/projected/8fa37c4d-3105-4641-8568-f29938b5cecc-kube-api-access-fzcpg\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.152867 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fa37c4d-3105-4641-8568-f29938b5cecc-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.206707 4739 scope.go:117] "RemoveContainer" containerID="4b0e7c8eb140916b6e74a21779841b69e440908d6b9c1495731308eccfead9ee" Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.207202 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fa37c4d-3105-4641-8568-f29938b5cecc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8fa37c4d-3105-4641-8568-f29938b5cecc" (UID: "8fa37c4d-3105-4641-8568-f29938b5cecc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.259370 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fa37c4d-3105-4641-8568-f29938b5cecc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:29 crc kubenswrapper[4739]: I0218 14:25:29.984877 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p9dsf" Feb 18 14:25:30 crc kubenswrapper[4739]: I0218 14:25:30.032883 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p9dsf"] Feb 18 14:25:30 crc kubenswrapper[4739]: I0218 14:25:30.051444 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p9dsf"] Feb 18 14:25:30 crc kubenswrapper[4739]: I0218 14:25:30.347740 4739 scope.go:117] "RemoveContainer" containerID="81f81c7066b7b4c95e8c6b6a3d0a11548cf322b1e9bf818f0a394ac79e2c2399" Feb 18 14:25:30 crc kubenswrapper[4739]: I0218 14:25:30.411598 4739 scope.go:117] "RemoveContainer" containerID="edabb29e619ae1eeb2b3b44d914c9284ac1c7ae85b8069685bf0ec6983667b3d" Feb 18 14:25:30 crc kubenswrapper[4739]: I0218 14:25:30.434527 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a62b266-b24d-47e5-ae8d-cb8524e1d628" path="/var/lib/kubelet/pods/0a62b266-b24d-47e5-ae8d-cb8524e1d628/volumes" Feb 18 14:25:30 crc kubenswrapper[4739]: I0218 14:25:30.438496 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fa37c4d-3105-4641-8568-f29938b5cecc" path="/var/lib/kubelet/pods/8fa37c4d-3105-4641-8568-f29938b5cecc/volumes" Feb 18 14:25:30 crc kubenswrapper[4739]: E0218 14:25:30.521281 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice\": RecentStats: unable to find data in memory cache]" Feb 18 14:25:31 crc kubenswrapper[4739]: I0218 14:25:31.930082 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.024642 4739 generic.go:334] "Generic (PLEG): container finished" podID="4106c506-1336-4121-a8d7-90fe333ce3df" containerID="251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd" exitCode=0 Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.024763 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4106c506-1336-4121-a8d7-90fe333ce3df","Type":"ContainerDied","Data":"251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd"} Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.025054 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4106c506-1336-4121-a8d7-90fe333ce3df","Type":"ContainerDied","Data":"238ba6fcba3c9aab1b9b714ffc70c837313da0593e88c1516a48844a82ac9503"} Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.025087 4739 scope.go:117] "RemoveContainer" containerID="6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.024927 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.030233 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-combined-ca-bundle\") pod \"4106c506-1336-4121-a8d7-90fe333ce3df\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.030300 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4106c506-1336-4121-a8d7-90fe333ce3df-log-httpd\") pod \"4106c506-1336-4121-a8d7-90fe333ce3df\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.030419 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-config-data\") pod \"4106c506-1336-4121-a8d7-90fe333ce3df\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.030656 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4106c506-1336-4121-a8d7-90fe333ce3df-run-httpd\") pod \"4106c506-1336-4121-a8d7-90fe333ce3df\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.030696 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77zc9\" (UniqueName: \"kubernetes.io/projected/4106c506-1336-4121-a8d7-90fe333ce3df-kube-api-access-77zc9\") pod \"4106c506-1336-4121-a8d7-90fe333ce3df\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.030735 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-sg-core-conf-yaml\") pod \"4106c506-1336-4121-a8d7-90fe333ce3df\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.030823 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-scripts\") pod \"4106c506-1336-4121-a8d7-90fe333ce3df\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.030877 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-ceilometer-tls-certs\") pod \"4106c506-1336-4121-a8d7-90fe333ce3df\" (UID: \"4106c506-1336-4121-a8d7-90fe333ce3df\") " Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.031325 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4106c506-1336-4121-a8d7-90fe333ce3df-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4106c506-1336-4121-a8d7-90fe333ce3df" (UID: "4106c506-1336-4121-a8d7-90fe333ce3df"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.031484 4739 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4106c506-1336-4121-a8d7-90fe333ce3df-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.034874 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4106c506-1336-4121-a8d7-90fe333ce3df-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4106c506-1336-4121-a8d7-90fe333ce3df" (UID: "4106c506-1336-4121-a8d7-90fe333ce3df"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.050067 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-scripts" (OuterVolumeSpecName: "scripts") pod "4106c506-1336-4121-a8d7-90fe333ce3df" (UID: "4106c506-1336-4121-a8d7-90fe333ce3df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.100946 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4106c506-1336-4121-a8d7-90fe333ce3df-kube-api-access-77zc9" (OuterVolumeSpecName: "kube-api-access-77zc9") pod "4106c506-1336-4121-a8d7-90fe333ce3df" (UID: "4106c506-1336-4121-a8d7-90fe333ce3df"). InnerVolumeSpecName "kube-api-access-77zc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.159572 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.159634 4739 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4106c506-1336-4121-a8d7-90fe333ce3df-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.159648 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77zc9\" (UniqueName: \"kubernetes.io/projected/4106c506-1336-4121-a8d7-90fe333ce3df-kube-api-access-77zc9\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.187748 4739 scope.go:117] "RemoveContainer" containerID="9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.189406 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" containerName="rabbitmq" containerID="cri-o://1196a1e6460811c94c46f39dbe0fd6c6f691e4c8c02027977bcbe32e7ab65403" gracePeriod=604794 Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.228214 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4106c506-1336-4121-a8d7-90fe333ce3df" (UID: "4106c506-1336-4121-a8d7-90fe333ce3df"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.235749 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4106c506-1336-4121-a8d7-90fe333ce3df" (UID: "4106c506-1336-4121-a8d7-90fe333ce3df"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.235771 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" containerName="rabbitmq" containerID="cri-o://3228467af95ce70d1ea7ebd3cd207c3fd6c54c75409aecf8eea728d75488502d" gracePeriod=604794 Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.272989 4739 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.273031 4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.296665 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-config-data" (OuterVolumeSpecName: "config-data") pod "4106c506-1336-4121-a8d7-90fe333ce3df" (UID: "4106c506-1336-4121-a8d7-90fe333ce3df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.301620 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4106c506-1336-4121-a8d7-90fe333ce3df" (UID: "4106c506-1336-4121-a8d7-90fe333ce3df"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.328041 4739 scope.go:117] "RemoveContainer" containerID="251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.372301 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.375687 4739 scope.go:117] "RemoveContainer" containerID="3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.376006 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.376038 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4106c506-1336-4121-a8d7-90fe333ce3df-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.394159 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.428996 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" path="/var/lib/kubelet/pods/4106c506-1336-4121-a8d7-90fe333ce3df/volumes" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.429952 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.430411 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="ceilometer-central-agent" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430428 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="ceilometer-central-agent" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.430446 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="proxy-httpd" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430470 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="proxy-httpd" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.430486 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a62b266-b24d-47e5-ae8d-cb8524e1d628" containerName="extract-utilities" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430496 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a62b266-b24d-47e5-ae8d-cb8524e1d628" containerName="extract-utilities" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.430508 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fa37c4d-3105-4641-8568-f29938b5cecc" containerName="extract-content" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430516 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fa37c4d-3105-4641-8568-f29938b5cecc" containerName="extract-content" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.430525 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="sg-core" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430532 4739 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="sg-core" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.430566 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fa37c4d-3105-4641-8568-f29938b5cecc" containerName="registry-server" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430572 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fa37c4d-3105-4641-8568-f29938b5cecc" containerName="registry-server" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.430583 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a62b266-b24d-47e5-ae8d-cb8524e1d628" containerName="extract-content" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430588 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a62b266-b24d-47e5-ae8d-cb8524e1d628" containerName="extract-content" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.430604 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a62b266-b24d-47e5-ae8d-cb8524e1d628" containerName="registry-server" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430610 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a62b266-b24d-47e5-ae8d-cb8524e1d628" containerName="registry-server" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.430625 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="ceilometer-notification-agent" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430631 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="ceilometer-notification-agent" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.430653 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fa37c4d-3105-4641-8568-f29938b5cecc" containerName="extract-utilities" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430660 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fa37c4d-3105-4641-8568-f29938b5cecc" containerName="extract-utilities" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430854 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="ceilometer-notification-agent" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430864 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="ceilometer-central-agent" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430881 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fa37c4d-3105-4641-8568-f29938b5cecc" containerName="registry-server" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430894 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a62b266-b24d-47e5-ae8d-cb8524e1d628" containerName="registry-server" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430928 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="proxy-httpd" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.430943 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="4106c506-1336-4121-a8d7-90fe333ce3df" containerName="sg-core" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.441175 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.444581 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.444809 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.444834 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.464739 4739 scope.go:117] "RemoveContainer" containerID="6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.465765 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be\": container with ID starting with 6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be not found: ID does not exist" containerID="6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.465796 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be"} err="failed to get container status \"6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be\": rpc error: code = NotFound desc = could not find container \"6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be\": container with ID starting with 6068b502edfbf333b362a237b751b55f52b3df6b8b6091de20afa3fe9bed51be not found: ID does not exist" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.465821 4739 scope.go:117] "RemoveContainer" containerID="9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.466147 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7\": container with ID starting with 9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7 not found: ID does not exist" containerID="9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.466222 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7"} err="failed to get container status \"9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7\": rpc error: code = NotFound desc = could not find container \"9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7\": container with ID starting with 9ecbae07abb481beb7ed7546f00a88afd810ee3a202f54fbc3fde3e2783c0ca7 not found: ID does not exist" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.466274 4739 scope.go:117] "RemoveContainer" containerID="251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.466917 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd\": container with ID starting with 
251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd not found: ID does not exist" containerID="251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.466975 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd"} err="failed to get container status \"251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd\": rpc error: code = NotFound desc = could not find container \"251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd\": container with ID starting with 251af02031b5d6fc1ca5b1c402fe7184aac678720ebb0b38e71ea10fa189d9fd not found: ID does not exist" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.467020 4739 scope.go:117] "RemoveContainer" containerID="3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.467398 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13\": container with ID starting with 3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13 not found: ID does not exist" containerID="3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.467429 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13"} err="failed to get container status \"3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13\": rpc error: code = NotFound desc = could not find container \"3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13\": container with ID starting with 3acc3abf95715439347fbb0600de1bf6a138bda3f79939cbc4b17e105f6e5b13 not found: ID does not exist" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.471219 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.588058 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.588222 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-scripts\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.588336 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-config-data\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.588664 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.588871 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.589094 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-run-httpd\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.589599 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kkdj\" (UniqueName: \"kubernetes.io/projected/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-kube-api-access-5kkdj\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.589748 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-log-httpd\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.692044 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.692475 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-scripts\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.692534 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-config-data\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.692599 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.692670 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.692747 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-run-httpd\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.692807 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kkdj\" (UniqueName: \"kubernetes.io/projected/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-kube-api-access-5kkdj\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.692860 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-log-httpd\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.693871 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-log-httpd\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.694050 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-run-httpd\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.698758 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.699412 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.699916 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.700110 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-scripts\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.703245 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-config-data\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.711899 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kkdj\" (UniqueName: 
\"kubernetes.io/projected/2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b-kube-api-access-5kkdj\") pod \"ceilometer-0\" (UID: \"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b\") " pod="openstack/ceilometer-0" Feb 18 14:25:32 crc kubenswrapper[4739]: E0218 14:25:32.748877 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice\": RecentStats: unable to find data in memory cache]" Feb 18 14:25:32 crc kubenswrapper[4739]: I0218 14:25:32.760176 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 14:25:33 crc kubenswrapper[4739]: I0218 14:25:33.215006 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Feb 18 14:25:33 crc kubenswrapper[4739]: I0218 14:25:33.286236 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: connect: connection refused" Feb 18 14:25:33 crc kubenswrapper[4739]: I0218 14:25:33.356724 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 14:25:34 crc kubenswrapper[4739]: I0218 14:25:34.057051 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b","Type":"ContainerStarted","Data":"ff5543db541b8d9ceb32c87a5b1108377bedd8a766d344ce85931e1103feec8e"} Feb 18 14:25:35 crc kubenswrapper[4739]: I0218 14:25:35.410688 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:25:35 crc kubenswrapper[4739]: E0218 14:25:35.411161 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:25:39 crc kubenswrapper[4739]: I0218 14:25:39.168161 4739 generic.go:334] "Generic (PLEG): container finished" podID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" containerID="1196a1e6460811c94c46f39dbe0fd6c6f691e4c8c02027977bcbe32e7ab65403" exitCode=0 Feb 18 14:25:39 crc kubenswrapper[4739]: I0218 14:25:39.168373 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b","Type":"ContainerDied","Data":"1196a1e6460811c94c46f39dbe0fd6c6f691e4c8c02027977bcbe32e7ab65403"} Feb 18 14:25:39 crc kubenswrapper[4739]: I0218 14:25:39.171274 4739 generic.go:334] "Generic (PLEG): container finished" podID="f34a572d-30ca-4de5-bf27-3371e1e9d197" containerID="3228467af95ce70d1ea7ebd3cd207c3fd6c54c75409aecf8eea728d75488502d" exitCode=0 Feb 18 14:25:39 crc kubenswrapper[4739]: I0218 14:25:39.171322 
4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f34a572d-30ca-4de5-bf27-3371e1e9d197","Type":"ContainerDied","Data":"3228467af95ce70d1ea7ebd3cd207c3fd6c54c75409aecf8eea728d75488502d"} Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.520894 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-xlgml"] Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.523568 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.526821 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.545722 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-xlgml"] Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.632065 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-config\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.632142 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgn4q\" (UniqueName: \"kubernetes.io/projected/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-kube-api-access-zgn4q\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.632258 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.632282 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.632347 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.632499 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.632783 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.735629 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.735707 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-config\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.735745 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgn4q\" (UniqueName: \"kubernetes.io/projected/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-kube-api-access-zgn4q\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.735851 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.735877 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.736017 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.736063 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.739281 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.741198 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.741335 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.741808 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-config\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.741884 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.742938 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.763461 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgn4q\" (UniqueName: \"kubernetes.io/projected/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-kube-api-access-zgn4q\") pod \"dnsmasq-dns-5b75489c6f-xlgml\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:42 crc kubenswrapper[4739]: I0218 14:25:42.861082 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:43 crc kubenswrapper[4739]: E0218 14:25:43.180198 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice\": RecentStats: unable to find data in memory cache]" Feb 18 14:25:43 crc kubenswrapper[4739]: I0218 14:25:43.214717 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Feb 18 14:25:45 crc kubenswrapper[4739]: E0218 14:25:45.101825 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice\": RecentStats: unable to find data in memory cache]" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.109132 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.221370 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf5kv\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-kube-api-access-rf5kv\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.221418 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-config-data\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.221514 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-plugins\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.221542 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-tls\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.221570 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-confd\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.221595 4739 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-plugins-conf\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.221781 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f34a572d-30ca-4de5-bf27-3371e1e9d197-erlang-cookie-secret\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.221887 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-server-conf\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.222660 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.222733 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-erlang-cookie\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.222760 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f34a572d-30ca-4de5-bf27-3371e1e9d197-pod-info\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") " Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.228262 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f34a572d-30ca-4de5-bf27-3371e1e9d197" (UID: "f34a572d-30ca-4de5-bf27-3371e1e9d197"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.228672 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f34a572d-30ca-4de5-bf27-3371e1e9d197" (UID: "f34a572d-30ca-4de5-bf27-3371e1e9d197"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.231196 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f34a572d-30ca-4de5-bf27-3371e1e9d197" (UID: "f34a572d-30ca-4de5-bf27-3371e1e9d197"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.233919 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f34a572d-30ca-4de5-bf27-3371e1e9d197-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f34a572d-30ca-4de5-bf27-3371e1e9d197" (UID: "f34a572d-30ca-4de5-bf27-3371e1e9d197"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.235978 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f34a572d-30ca-4de5-bf27-3371e1e9d197" (UID: "f34a572d-30ca-4de5-bf27-3371e1e9d197"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.236038 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f34a572d-30ca-4de5-bf27-3371e1e9d197-pod-info" (OuterVolumeSpecName: "pod-info") pod "f34a572d-30ca-4de5-bf27-3371e1e9d197" (UID: "f34a572d-30ca-4de5-bf27-3371e1e9d197"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.245416 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-kube-api-access-rf5kv" (OuterVolumeSpecName: "kube-api-access-rf5kv") pod "f34a572d-30ca-4de5-bf27-3371e1e9d197" (UID: "f34a572d-30ca-4de5-bf27-3371e1e9d197"). InnerVolumeSpecName "kube-api-access-rf5kv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.293962 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f34a572d-30ca-4de5-bf27-3371e1e9d197","Type":"ContainerDied","Data":"d4d2f4d954b6b105d9d4d012df3327d247d4b0d91bb0c3076d3bbe9f637b4cc0"} Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.294010 4739 scope.go:117] "RemoveContainer" containerID="3228467af95ce70d1ea7ebd3cd207c3fd6c54c75409aecf8eea728d75488502d" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.294159 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.324717 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091" (OuterVolumeSpecName: "persistence") pod "f34a572d-30ca-4de5-bf27-3371e1e9d197" (UID: "f34a572d-30ca-4de5-bf27-3371e1e9d197"). InnerVolumeSpecName "pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:25:45 crc kubenswrapper[4739]: E0218 14:25:45.325351 4739 reconciler_common.go:156] "operationExecutor.UnmountVolume failed (controllerAttachDetachEnabled true) for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") : UnmountVolume.NewUnmounter failed for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/f34a572d-30ca-4de5-bf27-3371e1e9d197/volumes/kubernetes.io~csi/pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/f34a572d-30ca-4de5-bf27-3371e1e9d197/volumes/kubernetes.io~csi/pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091/vol_data.json]: open /var/lib/kubelet/pods/f34a572d-30ca-4de5-bf27-3371e1e9d197/volumes/kubernetes.io~csi/pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091/vol_data.json: no such file or directory" err="UnmountVolume.NewUnmounter failed for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") pod \"f34a572d-30ca-4de5-bf27-3371e1e9d197\" (UID: \"f34a572d-30ca-4de5-bf27-3371e1e9d197\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/f34a572d-30ca-4de5-bf27-3371e1e9d197/volumes/kubernetes.io~csi/pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/f34a572d-30ca-4de5-bf27-3371e1e9d197/volumes/kubernetes.io~csi/pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091/vol_data.json]: open /var/lib/kubelet/pods/f34a572d-30ca-4de5-bf27-3371e1e9d197/volumes/kubernetes.io~csi/pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091/vol_data.json: no such file or directory" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.327344 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-config-data" (OuterVolumeSpecName: "config-data") pod "f34a572d-30ca-4de5-bf27-3371e1e9d197" (UID: "f34a572d-30ca-4de5-bf27-3371e1e9d197"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.330291 4739 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f34a572d-30ca-4de5-bf27-3371e1e9d197-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.331740 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.331854 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rf5kv\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-kube-api-access-rf5kv\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.331924 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.332407 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.332514 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.332579 4739 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.332755 4739 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f34a572d-30ca-4de5-bf27-3371e1e9d197-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.332858 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") on node \"crc\" " Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.348926 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-server-conf" (OuterVolumeSpecName: "server-conf") pod "f34a572d-30ca-4de5-bf27-3371e1e9d197" (UID: "f34a572d-30ca-4de5-bf27-3371e1e9d197"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.414924 4739 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.415107 4739 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091") on node "crc" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.435540 4739 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f34a572d-30ca-4de5-bf27-3371e1e9d197-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.436132 4739 reconciler_common.go:293] "Volume detached for volume \"pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.481516 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f34a572d-30ca-4de5-bf27-3371e1e9d197" (UID: "f34a572d-30ca-4de5-bf27-3371e1e9d197"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.539050 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f34a572d-30ca-4de5-bf27-3371e1e9d197-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.648563 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.674005 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.693856 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 14:25:45 crc kubenswrapper[4739]: E0218 14:25:45.694648 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" containerName="setup-container" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.694661 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" containerName="setup-container" Feb 18 14:25:45 crc kubenswrapper[4739]: E0218 14:25:45.694703 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" containerName="rabbitmq" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.694709 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" containerName="rabbitmq" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.694940 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" containerName="rabbitmq" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.696200 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.700281 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.700618 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.701045 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.701334 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bvn4l" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.701597 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.701841 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.706250 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.714830 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.749349 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.749428 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.749569 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.749656 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.749760 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.749813 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g5hv\" (UniqueName: \"kubernetes.io/projected/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-kube-api-access-5g5hv\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.749941 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.749985 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.750003 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.750070 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.750130 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.852290 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.852364 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g5hv\" (UniqueName: \"kubernetes.io/projected/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-kube-api-access-5g5hv\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.852494 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 
14:25:45.852528 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.852915 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.852919 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.853633 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.853712 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.853869 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.853900 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.853974 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.854053 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.854502 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.857062 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.857195 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.857202 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.859644 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.859748 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b4e22e9c66b4b9e31fc01977dfa2f505609dd5b0e95d61de241c54ade9d7a505/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.860134 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.860642 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.861612 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.865541 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.873005 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g5hv\" (UniqueName: \"kubernetes.io/projected/c71b6fb5-d59d-479d-b3fc-996d14bd93ed-kube-api-access-5g5hv\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:45 crc kubenswrapper[4739]: I0218 14:25:45.923587 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-122ec1a9-ed4d-4136-8bac-676b4fca0091\") pod \"rabbitmq-cell1-server-0\" (UID: \"c71b6fb5-d59d-479d-b3fc-996d14bd93ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:46 crc kubenswrapper[4739]: I0218 14:25:46.039307 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:25:46 crc kubenswrapper[4739]: I0218 14:25:46.430498 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" path="/var/lib/kubelet/pods/f34a572d-30ca-4de5-bf27-3371e1e9d197/volumes" Feb 18 14:25:48 crc kubenswrapper[4739]: E0218 14:25:48.265747 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice\": RecentStats: unable to find data in memory cache]" Feb 18 14:25:48 crc kubenswrapper[4739]: E0218 14:25:48.265814 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice\": RecentStats: unable to find data in memory cache]" Feb 18 14:25:48 crc kubenswrapper[4739]: I0218 14:25:48.285201 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f34a572d-30ca-4de5-bf27-3371e1e9d197" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: i/o timeout" Feb 18 14:25:49 crc kubenswrapper[4739]: I0218 14:25:49.410746 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:25:49 crc kubenswrapper[4739]: E0218 14:25:49.411266 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:25:50 crc kubenswrapper[4739]: E0218 14:25:50.028763 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: 
context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 14:25:50 crc kubenswrapper[4739]: E0218 14:25:50.028835 4739 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 14:25:50 crc kubenswrapper[4739]: E0218 14:25:50.028975 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62h27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-zq8vc_openstack(6e0a952f-ef12-46c6-8ca8-10f016b441be): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:25:50 crc kubenswrapper[4739]: E0218 14:25:50.030173 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-zq8vc" podUID="6e0a952f-ef12-46c6-8ca8-10f016b441be" Feb 18 14:25:50 crc kubenswrapper[4739]: E0218 14:25:50.353340 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-zq8vc" podUID="6e0a952f-ef12-46c6-8ca8-10f016b441be" Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.656843 4739 scope.go:117] 
"RemoveContainer" containerID="a716eae534567c7eacf310c551635181608ae4e159e2fd3e991903215040cab2" Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.795988 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.899675 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-plugins-conf\") pod \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.899763 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-config-data\") pod \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.899826 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbxbz\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-kube-api-access-vbxbz\") pod \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.899895 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-tls\") pod \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.899945 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-server-conf\") pod \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.900022 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-plugins\") pod \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.900100 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-confd\") pod \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.900207 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-erlang-cookie-secret\") pod \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.900237 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-pod-info\") pod \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.901178 4739 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" (UID: "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.903811 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\") pod \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.908040 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-erlang-cookie\") pod \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\" (UID: \"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b\") " Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.909373 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.914154 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" (UID: "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.919189 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" (UID: "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.942765 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" (UID: "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.955696 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-kube-api-access-vbxbz" (OuterVolumeSpecName: "kube-api-access-vbxbz") pod "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" (UID: "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b"). InnerVolumeSpecName "kube-api-access-vbxbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.956013 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" (UID: "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b"). 
InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.957836 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-pod-info" (OuterVolumeSpecName: "pod-info") pod "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" (UID: "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 14:25:51 crc kubenswrapper[4739]: I0218 14:25:51.965031 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c" (OuterVolumeSpecName: "persistence") pod "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" (UID: "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b"). InnerVolumeSpecName "pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.016998 4739 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.017037 4739 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.017059 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\") on node \"crc\" " Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.017071 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.017081 4739 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.017091 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbxbz\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-kube-api-access-vbxbz\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.017188 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.021982 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-config-data" (OuterVolumeSpecName: "config-data") pod "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" (UID: "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.060713 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-server-conf" (OuterVolumeSpecName: "server-conf") pod "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" (UID: "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.086170 4739 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.086516 4739 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c") on node "crc" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.123651 4739 reconciler_common.go:293] "Volume detached for volume \"pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.123694 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.123707 4739 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.176432 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-xlgml"] Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.176641 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" (UID: "846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.225575 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.380395 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b","Type":"ContainerDied","Data":"a323ec96e46e55ecd38a675963f8fb957be29188446c4c0701ca364f77566a1b"} Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.380604 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.432709 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.460938 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.474903 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 18 14:25:52 crc kubenswrapper[4739]: E0218 14:25:52.475553 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" containerName="setup-container" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.475580 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" containerName="setup-container" Feb 18 14:25:52 crc kubenswrapper[4739]: E0218 14:25:52.475616 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" containerName="rabbitmq" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.475625 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" containerName="rabbitmq" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.475946 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" containerName="rabbitmq" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.478136 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.501289 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.636975 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/83da58fc-6d28-4a56-abc1-00267082c6b6-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.637089 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/83da58fc-6d28-4a56-abc1-00267082c6b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.637138 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/83da58fc-6d28-4a56-abc1-00267082c6b6-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.637173 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/83da58fc-6d28-4a56-abc1-00267082c6b6-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.637206 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-p5n5w\" (UniqueName: \"kubernetes.io/projected/83da58fc-6d28-4a56-abc1-00267082c6b6-kube-api-access-p5n5w\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.637300 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/83da58fc-6d28-4a56-abc1-00267082c6b6-server-conf\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.637356 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/83da58fc-6d28-4a56-abc1-00267082c6b6-pod-info\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.637418 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/83da58fc-6d28-4a56-abc1-00267082c6b6-config-data\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.637494 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/83da58fc-6d28-4a56-abc1-00267082c6b6-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.637518 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/83da58fc-6d28-4a56-abc1-00267082c6b6-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.637563 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: W0218 14:25:52.668173 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb44bafed_1808_41fc_b2bb_fcd2f1f02a17.slice/crio-330f6a433a08cd27da7dffd6b8364dcfdd6172336292c14a7b18e098f0eac2e6 WatchSource:0}: Error finding container 330f6a433a08cd27da7dffd6b8364dcfdd6172336292c14a7b18e098f0eac2e6: Status 404 returned error can't find the container with id 330f6a433a08cd27da7dffd6b8364dcfdd6172336292c14a7b18e098f0eac2e6 Feb 18 14:25:52 crc kubenswrapper[4739]: E0218 14:25:52.691106 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 14:25:52 crc kubenswrapper[4739]: E0218 14:25:52.691157 4739 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = 
copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 14:25:52 crc kubenswrapper[4739]: E0218 14:25:52.691282 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66ch94h6ch565h664h77h658hcch5b5h66ch86h5dfh85h5d6h576hd7hc4h544h587h649hb8h64ch86h5b9h597h677h59bhcch89h667h5b6h674q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5kkdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.712851 4739 scope.go:117] "RemoveContainer" containerID="1196a1e6460811c94c46f39dbe0fd6c6f691e4c8c02027977bcbe32e7ab65403" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.740676 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/83da58fc-6d28-4a56-abc1-00267082c6b6-config-data\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.741541 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/83da58fc-6d28-4a56-abc1-00267082c6b6-config-data\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.741636 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/83da58fc-6d28-4a56-abc1-00267082c6b6-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.741793 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/83da58fc-6d28-4a56-abc1-00267082c6b6-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.741914 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.741994 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/83da58fc-6d28-4a56-abc1-00267082c6b6-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.742208 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/83da58fc-6d28-4a56-abc1-00267082c6b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.742309 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/83da58fc-6d28-4a56-abc1-00267082c6b6-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.742400 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/83da58fc-6d28-4a56-abc1-00267082c6b6-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.742490 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5n5w\" (UniqueName: \"kubernetes.io/projected/83da58fc-6d28-4a56-abc1-00267082c6b6-kube-api-access-p5n5w\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.742579 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/83da58fc-6d28-4a56-abc1-00267082c6b6-server-conf\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " 
pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.742670 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/83da58fc-6d28-4a56-abc1-00267082c6b6-pod-info\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.743403 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/83da58fc-6d28-4a56-abc1-00267082c6b6-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.743684 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/83da58fc-6d28-4a56-abc1-00267082c6b6-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.743725 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/83da58fc-6d28-4a56-abc1-00267082c6b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.744421 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/83da58fc-6d28-4a56-abc1-00267082c6b6-server-conf\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.745959 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/83da58fc-6d28-4a56-abc1-00267082c6b6-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.747099 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/83da58fc-6d28-4a56-abc1-00267082c6b6-pod-info\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.749912 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/83da58fc-6d28-4a56-abc1-00267082c6b6-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.758572 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/83da58fc-6d28-4a56-abc1-00267082c6b6-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.764942 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.764996 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/42f2352e597643fb9091206ae40b48fcb025360f730dba5ba00ebee7f81842b7/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.776120 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5n5w\" (UniqueName: \"kubernetes.io/projected/83da58fc-6d28-4a56-abc1-00267082c6b6-kube-api-access-p5n5w\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.852927 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0ea0071b-6ff0-4534-be32-a7c78de6646c\") pod \"rabbitmq-server-2\" (UID: \"83da58fc-6d28-4a56-abc1-00267082c6b6\") " pod="openstack/rabbitmq-server-2" Feb 18 14:25:52 crc kubenswrapper[4739]: I0218 14:25:52.874588 4739 scope.go:117] "RemoveContainer" containerID="aca2d7cf6c996ecda1b70039221c80c30560394fd55fdc793dfd46773ab29a77" Feb 18 14:25:53 crc kubenswrapper[4739]: I0218 14:25:53.101543 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 18 14:25:53 crc kubenswrapper[4739]: I0218 14:25:53.247135 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 14:25:53 crc kubenswrapper[4739]: I0218 14:25:53.394418 4739 generic.go:334] "Generic (PLEG): container finished" podID="b44bafed-1808-41fc-b2bb-fcd2f1f02a17" containerID="540b32810564c1395af833055ad23799a4b1a66b7693eafbe7c3cebb7f686098" exitCode=0 Feb 18 14:25:53 crc kubenswrapper[4739]: I0218 14:25:53.394795 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" event={"ID":"b44bafed-1808-41fc-b2bb-fcd2f1f02a17","Type":"ContainerDied","Data":"540b32810564c1395af833055ad23799a4b1a66b7693eafbe7c3cebb7f686098"} Feb 18 14:25:53 crc kubenswrapper[4739]: I0218 14:25:53.394827 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" event={"ID":"b44bafed-1808-41fc-b2bb-fcd2f1f02a17","Type":"ContainerStarted","Data":"330f6a433a08cd27da7dffd6b8364dcfdd6172336292c14a7b18e098f0eac2e6"} Feb 18 14:25:53 crc kubenswrapper[4739]: I0218 14:25:53.400531 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c71b6fb5-d59d-479d-b3fc-996d14bd93ed","Type":"ContainerStarted","Data":"a0e467492a9d509677a9a2ce5bfb03daf177f33b7ad5e3a75510348a76449f90"} Feb 18 14:25:53 crc kubenswrapper[4739]: E0218 14:25:53.563686 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51\": RecentStats: unable to find data in memory cache]" Feb 18 14:25:53 crc kubenswrapper[4739]: I0218 14:25:53.640904 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 18 14:25:54 crc kubenswrapper[4739]: I0218 14:25:54.425099 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b" path="/var/lib/kubelet/pods/846b1cf2-bffb-4eca-a8f2-f3c0fcc7ac4b/volumes" Feb 18 14:25:54 crc kubenswrapper[4739]: I0218 14:25:54.427202 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"83da58fc-6d28-4a56-abc1-00267082c6b6","Type":"ContainerStarted","Data":"cff9d9bb2d51c4be81fd339ad6c53fe9f2e85c7e962f244b119134a8ef83ff99"} Feb 18 14:25:54 crc kubenswrapper[4739]: I0218 14:25:54.427246 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b","Type":"ContainerStarted","Data":"aaab1c29ca5a9641b89b3702969fcc58211756abc60eeb4909036a0cbf64a830"} Feb 18 14:25:54 crc kubenswrapper[4739]: I0218 14:25:54.427265 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" event={"ID":"b44bafed-1808-41fc-b2bb-fcd2f1f02a17","Type":"ContainerStarted","Data":"44feab0d878b49b40c9f78094ee6d7d5fb8f3aacd6959e36e5fce0d47077102d"} Feb 18 14:25:54 crc kubenswrapper[4739]: I0218 14:25:54.427296 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:25:54 crc kubenswrapper[4739]: I0218 14:25:54.449540 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" podStartSLOduration=12.449523239 podStartE2EDuration="12.449523239s" podCreationTimestamp="2026-02-18 14:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:25:54.44159134 +0000 UTC m=+1586.937312272" watchObservedRunningTime="2026-02-18 14:25:54.449523239 +0000 UTC m=+1586.945244161" Feb 18 14:25:55 crc kubenswrapper[4739]: I0218 14:25:55.441878 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c71b6fb5-d59d-479d-b3fc-996d14bd93ed","Type":"ContainerStarted","Data":"9c40a962e22b100be23a7a0163ebcb66d15c4bd51bb227f4c767cbf6c58812d0"} Feb 18 14:25:56 crc kubenswrapper[4739]: I0218 14:25:56.458414 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b","Type":"ContainerStarted","Data":"a7cd61cee84e63df9331a9f85d1b2cfa167e94f3ff8dd7c7a78e021305137855"} Feb 18 14:25:56 crc kubenswrapper[4739]: I0218 14:25:56.461319 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"83da58fc-6d28-4a56-abc1-00267082c6b6","Type":"ContainerStarted","Data":"109a1d01b2b388822b4017533289f525bb0875693261feeb825b93643fe2bf46"} Feb 18 14:25:59 crc kubenswrapper[4739]: E0218 14:25:59.893535 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" 
podUID="2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b" Feb 18 14:26:00 crc kubenswrapper[4739]: E0218 14:26:00.356888 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51\": RecentStats: unable to find data in memory cache]" Feb 18 14:26:00 crc kubenswrapper[4739]: I0218 14:26:00.522018 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b","Type":"ContainerStarted","Data":"9e20d9cae3babc8c64d126e1fd80af304a9f344aba078a57ae3836ac23fe1ccb"} Feb 18 14:26:00 crc kubenswrapper[4739]: I0218 14:26:00.523509 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 14:26:00 crc kubenswrapper[4739]: E0218 14:26:00.524522 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b" Feb 18 14:26:01 crc kubenswrapper[4739]: I0218 14:26:01.411013 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:26:01 crc kubenswrapper[4739]: E0218 14:26:01.411756 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:26:01 crc kubenswrapper[4739]: E0218 14:26:01.538660 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b" Feb 18 14:26:02 crc kubenswrapper[4739]: I0218 14:26:02.862649 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:26:02 crc kubenswrapper[4739]: I0218 14:26:02.919997 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-8x5jn"] Feb 18 14:26:02 crc kubenswrapper[4739]: I0218 14:26:02.920225 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" podUID="107ff6da-f0af-471c-bfaf-08364992c44e" containerName="dnsmasq-dns" containerID="cri-o://6d1fa176139b49aa3f7f2787ae66d435ca3eb9a294abfbc4eac9b73d793efd8b" gracePeriod=10 Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.248841 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-hd9ps"] Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.253015 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.287573 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-hd9ps"] Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.376024 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-config\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.376093 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.376125 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.376178 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.376209 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.376256 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb96t\" (UniqueName: \"kubernetes.io/projected/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-kube-api-access-zb96t\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.376282 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.479988 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-config\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.480089 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.480128 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.480179 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.480230 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.480296 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb96t\" (UniqueName: \"kubernetes.io/projected/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-kube-api-access-zb96t\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.480326 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.491404 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-config\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.496108 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.497058 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.499547 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.500206 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.520871 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.539716 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb96t\" (UniqueName: \"kubernetes.io/projected/703ba4cc-fc0d-4adf-bb13-62fecb68cff7-kube-api-access-zb96t\") pod \"dnsmasq-dns-5d75f767dc-hd9ps\" (UID: \"703ba4cc-fc0d-4adf-bb13-62fecb68cff7\") " pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.568727 4739 generic.go:334] "Generic (PLEG): container finished" podID="107ff6da-f0af-471c-bfaf-08364992c44e" containerID="6d1fa176139b49aa3f7f2787ae66d435ca3eb9a294abfbc4eac9b73d793efd8b" exitCode=0 Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.568819 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" event={"ID":"107ff6da-f0af-471c-bfaf-08364992c44e","Type":"ContainerDied","Data":"6d1fa176139b49aa3f7f2787ae66d435ca3eb9a294abfbc4eac9b73d793efd8b"} Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.588464 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.589836 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zq8vc" event={"ID":"6e0a952f-ef12-46c6-8ca8-10f016b441be","Type":"ContainerStarted","Data":"03775c57719ac4b92c1847bc19cfdeea48db66d3dda5aee4aca36cb4a626f862"} Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.607096 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-zq8vc" podStartSLOduration=1.937988382 podStartE2EDuration="40.60707409s" podCreationTimestamp="2026-02-18 14:25:23 +0000 UTC" firstStartedPulling="2026-02-18 14:25:23.899414832 +0000 UTC m=+1556.395135754" lastFinishedPulling="2026-02-18 14:26:02.56850054 +0000 UTC m=+1595.064221462" observedRunningTime="2026-02-18 14:26:03.60586713 +0000 UTC m=+1596.101588052" watchObservedRunningTime="2026-02-18 14:26:03.60707409 +0000 UTC m=+1596.102795002" Feb 18 14:26:03 crc kubenswrapper[4739]: E0218 14:26:03.649928 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice\": RecentStats: unable to find data in memory cache]" Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.860571 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.991781 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-dns-svc\") pod \"107ff6da-f0af-471c-bfaf-08364992c44e\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.991824 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-config\") pod \"107ff6da-f0af-471c-bfaf-08364992c44e\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.991964 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhzzj\" (UniqueName: \"kubernetes.io/projected/107ff6da-f0af-471c-bfaf-08364992c44e-kube-api-access-bhzzj\") pod \"107ff6da-f0af-471c-bfaf-08364992c44e\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.992012 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-dns-swift-storage-0\") pod \"107ff6da-f0af-471c-bfaf-08364992c44e\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " Feb 18 14:26:03 crc kubenswrapper[4739]: I0218 14:26:03.992084 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-ovsdbserver-sb\") pod \"107ff6da-f0af-471c-bfaf-08364992c44e\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " Feb 18 14:26:03 crc kubenswrapper[4739]: 
I0218 14:26:03.993068 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-ovsdbserver-nb\") pod \"107ff6da-f0af-471c-bfaf-08364992c44e\" (UID: \"107ff6da-f0af-471c-bfaf-08364992c44e\") " Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.011756 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/107ff6da-f0af-471c-bfaf-08364992c44e-kube-api-access-bhzzj" (OuterVolumeSpecName: "kube-api-access-bhzzj") pod "107ff6da-f0af-471c-bfaf-08364992c44e" (UID: "107ff6da-f0af-471c-bfaf-08364992c44e"). InnerVolumeSpecName "kube-api-access-bhzzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.066412 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "107ff6da-f0af-471c-bfaf-08364992c44e" (UID: "107ff6da-f0af-471c-bfaf-08364992c44e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.070589 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "107ff6da-f0af-471c-bfaf-08364992c44e" (UID: "107ff6da-f0af-471c-bfaf-08364992c44e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.071456 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-config" (OuterVolumeSpecName: "config") pod "107ff6da-f0af-471c-bfaf-08364992c44e" (UID: "107ff6da-f0af-471c-bfaf-08364992c44e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.079646 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "107ff6da-f0af-471c-bfaf-08364992c44e" (UID: "107ff6da-f0af-471c-bfaf-08364992c44e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.098386 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhzzj\" (UniqueName: \"kubernetes.io/projected/107ff6da-f0af-471c-bfaf-08364992c44e-kube-api-access-bhzzj\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.098434 4739 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.098475 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.098487 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.098501 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.133876 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "107ff6da-f0af-471c-bfaf-08364992c44e" (UID: "107ff6da-f0af-471c-bfaf-08364992c44e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.200360 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/107ff6da-f0af-471c-bfaf-08364992c44e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:04 crc kubenswrapper[4739]: W0218 14:26:04.264898 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod703ba4cc_fc0d_4adf_bb13_62fecb68cff7.slice/crio-c55efd4826e71b6188592be5407eec8201b186e25c4100dd43bf4c4a245597fb WatchSource:0}: Error finding container c55efd4826e71b6188592be5407eec8201b186e25c4100dd43bf4c4a245597fb: Status 404 returned error can't find the container with id c55efd4826e71b6188592be5407eec8201b186e25c4100dd43bf4c4a245597fb Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.267909 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-hd9ps"] Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.623966 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" event={"ID":"107ff6da-f0af-471c-bfaf-08364992c44e","Type":"ContainerDied","Data":"de253019cab38f430ba5baf38246bca706fcc962369cf21cb7d0dd554226a189"} Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.624018 4739 scope.go:117] "RemoveContainer" containerID="6d1fa176139b49aa3f7f2787ae66d435ca3eb9a294abfbc4eac9b73d793efd8b" Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.624324 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-8x5jn" Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.631135 4739 generic.go:334] "Generic (PLEG): container finished" podID="703ba4cc-fc0d-4adf-bb13-62fecb68cff7" containerID="98d1809038cf13a45c6ba78f1f6327a486ba6d0c214ebfd42b91cf4f479624a4" exitCode=0 Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.631176 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" event={"ID":"703ba4cc-fc0d-4adf-bb13-62fecb68cff7","Type":"ContainerDied","Data":"98d1809038cf13a45c6ba78f1f6327a486ba6d0c214ebfd42b91cf4f479624a4"} Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.631202 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" event={"ID":"703ba4cc-fc0d-4adf-bb13-62fecb68cff7","Type":"ContainerStarted","Data":"c55efd4826e71b6188592be5407eec8201b186e25c4100dd43bf4c4a245597fb"} Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.666743 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-8x5jn"] Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.668924 4739 scope.go:117] "RemoveContainer" containerID="0fa795a89771ccc792842d737411fc77aacef89807fe0ac39f6e7b6973469e7a" Feb 18 14:26:04 crc kubenswrapper[4739]: I0218 14:26:04.692380 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-8x5jn"] Feb 18 14:26:05 crc kubenswrapper[4739]: I0218 14:26:05.658836 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" event={"ID":"703ba4cc-fc0d-4adf-bb13-62fecb68cff7","Type":"ContainerStarted","Data":"c5166190b91acc9de964e3359fd2d81b6451f09adb70bba097a96fc40c919a96"} Feb 18 14:26:05 crc kubenswrapper[4739]: I0218 14:26:05.660977 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:05 crc kubenswrapper[4739]: I0218 14:26:05.668830 4739 generic.go:334] "Generic (PLEG): container finished" podID="6e0a952f-ef12-46c6-8ca8-10f016b441be" containerID="03775c57719ac4b92c1847bc19cfdeea48db66d3dda5aee4aca36cb4a626f862" exitCode=0 Feb 18 14:26:05 crc kubenswrapper[4739]: I0218 14:26:05.668930 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zq8vc" event={"ID":"6e0a952f-ef12-46c6-8ca8-10f016b441be","Type":"ContainerDied","Data":"03775c57719ac4b92c1847bc19cfdeea48db66d3dda5aee4aca36cb4a626f862"} Feb 18 14:26:05 crc kubenswrapper[4739]: I0218 14:26:05.699275 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" podStartSLOduration=2.699247429 podStartE2EDuration="2.699247429s" podCreationTimestamp="2026-02-18 14:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:26:05.694693254 +0000 UTC m=+1598.190414176" watchObservedRunningTime="2026-02-18 14:26:05.699247429 +0000 UTC m=+1598.194968361" Feb 18 14:26:06 crc kubenswrapper[4739]: I0218 14:26:06.423714 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="107ff6da-f0af-471c-bfaf-08364992c44e" path="/var/lib/kubelet/pods/107ff6da-f0af-471c-bfaf-08364992c44e/volumes" Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.197739 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-zq8vc" Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.285592 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62h27\" (UniqueName: \"kubernetes.io/projected/6e0a952f-ef12-46c6-8ca8-10f016b441be-kube-api-access-62h27\") pod \"6e0a952f-ef12-46c6-8ca8-10f016b441be\" (UID: \"6e0a952f-ef12-46c6-8ca8-10f016b441be\") " Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.285799 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e0a952f-ef12-46c6-8ca8-10f016b441be-combined-ca-bundle\") pod \"6e0a952f-ef12-46c6-8ca8-10f016b441be\" (UID: \"6e0a952f-ef12-46c6-8ca8-10f016b441be\") " Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.285882 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e0a952f-ef12-46c6-8ca8-10f016b441be-config-data\") pod \"6e0a952f-ef12-46c6-8ca8-10f016b441be\" (UID: \"6e0a952f-ef12-46c6-8ca8-10f016b441be\") " Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.291832 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e0a952f-ef12-46c6-8ca8-10f016b441be-kube-api-access-62h27" (OuterVolumeSpecName: "kube-api-access-62h27") pod "6e0a952f-ef12-46c6-8ca8-10f016b441be" (UID: "6e0a952f-ef12-46c6-8ca8-10f016b441be"). InnerVolumeSpecName "kube-api-access-62h27". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.340387 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e0a952f-ef12-46c6-8ca8-10f016b441be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e0a952f-ef12-46c6-8ca8-10f016b441be" (UID: "6e0a952f-ef12-46c6-8ca8-10f016b441be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.374594 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e0a952f-ef12-46c6-8ca8-10f016b441be-config-data" (OuterVolumeSpecName: "config-data") pod "6e0a952f-ef12-46c6-8ca8-10f016b441be" (UID: "6e0a952f-ef12-46c6-8ca8-10f016b441be"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.394463 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62h27\" (UniqueName: \"kubernetes.io/projected/6e0a952f-ef12-46c6-8ca8-10f016b441be-kube-api-access-62h27\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.394493 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e0a952f-ef12-46c6-8ca8-10f016b441be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.394503 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e0a952f-ef12-46c6-8ca8-10f016b441be-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.700082 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-zq8vc" event={"ID":"6e0a952f-ef12-46c6-8ca8-10f016b441be","Type":"ContainerDied","Data":"254128b8b4776a8e196ceddf4f74f11d413bddfc79aebb13e55002e6ac9d1d0a"} Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.700135 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="254128b8b4776a8e196ceddf4f74f11d413bddfc79aebb13e55002e6ac9d1d0a" Feb 18 14:26:07 crc kubenswrapper[4739]: I0218 14:26:07.700583 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-zq8vc" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.713172 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5957545cb-6lrc2"] Feb 18 14:26:08 crc kubenswrapper[4739]: E0218 14:26:08.713990 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="107ff6da-f0af-471c-bfaf-08364992c44e" containerName="dnsmasq-dns" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.714007 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="107ff6da-f0af-471c-bfaf-08364992c44e" containerName="dnsmasq-dns" Feb 18 14:26:08 crc kubenswrapper[4739]: E0218 14:26:08.714035 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e0a952f-ef12-46c6-8ca8-10f016b441be" containerName="heat-db-sync" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.714041 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e0a952f-ef12-46c6-8ca8-10f016b441be" containerName="heat-db-sync" Feb 18 14:26:08 crc kubenswrapper[4739]: E0218 14:26:08.714056 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="107ff6da-f0af-471c-bfaf-08364992c44e" containerName="init" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.714061 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="107ff6da-f0af-471c-bfaf-08364992c44e" containerName="init" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.714260 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e0a952f-ef12-46c6-8ca8-10f016b441be" containerName="heat-db-sync" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.714292 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="107ff6da-f0af-471c-bfaf-08364992c44e" containerName="dnsmasq-dns" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.715057 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5957545cb-6lrc2" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.729282 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5957545cb-6lrc2"] Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.775346 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5cfc6d5787-cxgnr"] Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.778843 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.794083 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5cfc6d5787-cxgnr"] Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.825801 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-8dd984b75-2cjs7"] Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.827310 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.831822 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26539513-f274-471e-ad4a-10bcd4758458-combined-ca-bundle\") pod \"heat-engine-5957545cb-6lrc2\" (UID: \"26539513-f274-471e-ad4a-10bcd4758458\") " pod="openstack/heat-engine-5957545cb-6lrc2" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.831858 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98xcv\" (UniqueName: \"kubernetes.io/projected/26539513-f274-471e-ad4a-10bcd4758458-kube-api-access-98xcv\") pod \"heat-engine-5957545cb-6lrc2\" (UID: \"26539513-f274-471e-ad4a-10bcd4758458\") " pod="openstack/heat-engine-5957545cb-6lrc2" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.831925 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvjfj\" (UniqueName: \"kubernetes.io/projected/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-kube-api-access-qvjfj\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.831953 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-config-data-custom\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.832014 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-internal-tls-certs\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.832037 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26539513-f274-471e-ad4a-10bcd4758458-config-data\") pod \"heat-engine-5957545cb-6lrc2\" (UID: \"26539513-f274-471e-ad4a-10bcd4758458\") " pod="openstack/heat-engine-5957545cb-6lrc2" Feb 18 14:26:08 crc 
kubenswrapper[4739]: I0218 14:26:08.832069 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-config-data\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.832093 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-public-tls-certs\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.832167 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-combined-ca-bundle\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.832214 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26539513-f274-471e-ad4a-10bcd4758458-config-data-custom\") pod \"heat-engine-5957545cb-6lrc2\" (UID: \"26539513-f274-471e-ad4a-10bcd4758458\") " pod="openstack/heat-engine-5957545cb-6lrc2" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.854431 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-8dd984b75-2cjs7"] Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934129 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-config-data-custom\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934263 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26539513-f274-471e-ad4a-10bcd4758458-combined-ca-bundle\") pod \"heat-engine-5957545cb-6lrc2\" (UID: \"26539513-f274-471e-ad4a-10bcd4758458\") " pod="openstack/heat-engine-5957545cb-6lrc2" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934296 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98xcv\" (UniqueName: \"kubernetes.io/projected/26539513-f274-471e-ad4a-10bcd4758458-kube-api-access-98xcv\") pod \"heat-engine-5957545cb-6lrc2\" (UID: \"26539513-f274-471e-ad4a-10bcd4758458\") " pod="openstack/heat-engine-5957545cb-6lrc2" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934319 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-config-data\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934367 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-internal-tls-certs\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934397 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvjfj\" (UniqueName: \"kubernetes.io/projected/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-kube-api-access-qvjfj\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934427 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-combined-ca-bundle\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934465 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-public-tls-certs\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934482 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-config-data-custom\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934536 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-internal-tls-certs\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934560 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26539513-f274-471e-ad4a-10bcd4758458-config-data\") pod \"heat-engine-5957545cb-6lrc2\" (UID: \"26539513-f274-471e-ad4a-10bcd4758458\") " pod="openstack/heat-engine-5957545cb-6lrc2" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934592 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-config-data\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934615 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-public-tls-certs\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934670 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-combined-ca-bundle\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934721 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sv7w\" (UniqueName: \"kubernetes.io/projected/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-kube-api-access-7sv7w\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.934746 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26539513-f274-471e-ad4a-10bcd4758458-config-data-custom\") pod \"heat-engine-5957545cb-6lrc2\" (UID: \"26539513-f274-471e-ad4a-10bcd4758458\") " pod="openstack/heat-engine-5957545cb-6lrc2" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.941300 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26539513-f274-471e-ad4a-10bcd4758458-config-data\") pod \"heat-engine-5957545cb-6lrc2\" (UID: \"26539513-f274-471e-ad4a-10bcd4758458\") " pod="openstack/heat-engine-5957545cb-6lrc2" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.942007 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-public-tls-certs\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.942424 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26539513-f274-471e-ad4a-10bcd4758458-combined-ca-bundle\") pod \"heat-engine-5957545cb-6lrc2\" (UID: \"26539513-f274-471e-ad4a-10bcd4758458\") " pod="openstack/heat-engine-5957545cb-6lrc2" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.943514 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26539513-f274-471e-ad4a-10bcd4758458-config-data-custom\") pod \"heat-engine-5957545cb-6lrc2\" (UID: \"26539513-f274-471e-ad4a-10bcd4758458\") " pod="openstack/heat-engine-5957545cb-6lrc2" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.943899 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-config-data\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.944967 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-config-data-custom\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.951306 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-internal-tls-certs\") pod 
\"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.954610 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvjfj\" (UniqueName: \"kubernetes.io/projected/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-kube-api-access-qvjfj\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.957326 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c65abc8-9ca5-4a28-89d7-f5ffe23d1040-combined-ca-bundle\") pod \"heat-api-5cfc6d5787-cxgnr\" (UID: \"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040\") " pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:08 crc kubenswrapper[4739]: I0218 14:26:08.957935 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98xcv\" (UniqueName: \"kubernetes.io/projected/26539513-f274-471e-ad4a-10bcd4758458-kube-api-access-98xcv\") pod \"heat-engine-5957545cb-6lrc2\" (UID: \"26539513-f274-471e-ad4a-10bcd4758458\") " pod="openstack/heat-engine-5957545cb-6lrc2" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.035538 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5957545cb-6lrc2" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.037101 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-config-data-custom\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.037187 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-config-data\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.037226 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-internal-tls-certs\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.037262 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-combined-ca-bundle\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.037287 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-public-tls-certs\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.037423 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7sv7w\" (UniqueName: \"kubernetes.io/projected/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-kube-api-access-7sv7w\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.042206 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-internal-tls-certs\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.047680 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-config-data\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.055281 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-public-tls-certs\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.055301 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-config-data-custom\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.055549 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-combined-ca-bundle\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.059547 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sv7w\" (UniqueName: \"kubernetes.io/projected/ecd1f6fa-009d-4942-98ad-203c31a7bf5b-kube-api-access-7sv7w\") pod \"heat-cfnapi-8dd984b75-2cjs7\" (UID: \"ecd1f6fa-009d-4942-98ad-203c31a7bf5b\") " pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.121534 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.152703 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:09 crc kubenswrapper[4739]: W0218 14:26:09.587706 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26539513_f274_471e_ad4a_10bcd4758458.slice/crio-20d25184e087a0f8edeea8b121d05ef712c75c5140f848c2fb0a00c0c47c3f29 WatchSource:0}: Error finding container 20d25184e087a0f8edeea8b121d05ef712c75c5140f848c2fb0a00c0c47c3f29: Status 404 returned error can't find the container with id 20d25184e087a0f8edeea8b121d05ef712c75c5140f848c2fb0a00c0c47c3f29 Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.594038 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5957545cb-6lrc2"] Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.721377 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5957545cb-6lrc2" event={"ID":"26539513-f274-471e-ad4a-10bcd4758458","Type":"ContainerStarted","Data":"20d25184e087a0f8edeea8b121d05ef712c75c5140f848c2fb0a00c0c47c3f29"} Feb 18 14:26:09 crc kubenswrapper[4739]: W0218 14:26:09.728309 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c65abc8_9ca5_4a28_89d7_f5ffe23d1040.slice/crio-c4f01b6bc2775d69d72467f0eedb94b8633a59f1fc4446808c2a8fa25cb2fb08 WatchSource:0}: Error finding container c4f01b6bc2775d69d72467f0eedb94b8633a59f1fc4446808c2a8fa25cb2fb08: Status 404 returned error can't find the container with id c4f01b6bc2775d69d72467f0eedb94b8633a59f1fc4446808c2a8fa25cb2fb08 Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.732108 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5cfc6d5787-cxgnr"] Feb 18 14:26:09 crc kubenswrapper[4739]: I0218 14:26:09.883683 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-8dd984b75-2cjs7"] Feb 18 14:26:09 crc kubenswrapper[4739]: W0218 14:26:09.887197 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecd1f6fa_009d_4942_98ad_203c31a7bf5b.slice/crio-5e11b45b876f04f731c74c59c5e7f0a906e1af8f45937ab9a072f46072bf3bb1 WatchSource:0}: Error finding container 5e11b45b876f04f731c74c59c5e7f0a906e1af8f45937ab9a072f46072bf3bb1: Status 404 returned error can't find the container with id 5e11b45b876f04f731c74c59c5e7f0a906e1af8f45937ab9a072f46072bf3bb1 Feb 18 14:26:10 crc kubenswrapper[4739]: I0218 14:26:10.733911 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5957545cb-6lrc2" event={"ID":"26539513-f274-471e-ad4a-10bcd4758458","Type":"ContainerStarted","Data":"d606185937500eb2bee6a25d8b0ad1d7609bc85021a0104784b6ed19160a4d25"} Feb 18 14:26:10 crc kubenswrapper[4739]: I0218 14:26:10.734041 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5957545cb-6lrc2" Feb 18 14:26:10 crc kubenswrapper[4739]: I0218 14:26:10.735810 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5cfc6d5787-cxgnr" event={"ID":"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040","Type":"ContainerStarted","Data":"c4f01b6bc2775d69d72467f0eedb94b8633a59f1fc4446808c2a8fa25cb2fb08"} Feb 18 14:26:10 crc kubenswrapper[4739]: I0218 14:26:10.740372 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8dd984b75-2cjs7" 
event={"ID":"ecd1f6fa-009d-4942-98ad-203c31a7bf5b","Type":"ContainerStarted","Data":"5e11b45b876f04f731c74c59c5e7f0a906e1af8f45937ab9a072f46072bf3bb1"} Feb 18 14:26:10 crc kubenswrapper[4739]: I0218 14:26:10.768892 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5957545cb-6lrc2" podStartSLOduration=2.768869988 podStartE2EDuration="2.768869988s" podCreationTimestamp="2026-02-18 14:26:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:26:10.757916113 +0000 UTC m=+1603.253637045" watchObservedRunningTime="2026-02-18 14:26:10.768869988 +0000 UTC m=+1603.264590920" Feb 18 14:26:11 crc kubenswrapper[4739]: I0218 14:26:11.768879 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5cfc6d5787-cxgnr" event={"ID":"9c65abc8-9ca5-4a28-89d7-f5ffe23d1040","Type":"ContainerStarted","Data":"6ec1ba72394816ead088c0f4d2300b7976df7abc9aaed5c94058025f5a5abb8f"} Feb 18 14:26:11 crc kubenswrapper[4739]: I0218 14:26:11.769485 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:11 crc kubenswrapper[4739]: I0218 14:26:11.776195 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8dd984b75-2cjs7" event={"ID":"ecd1f6fa-009d-4942-98ad-203c31a7bf5b","Type":"ContainerStarted","Data":"0d27966de4de938ab655c7b7bd9b35921570d1b746f3453221cfdd6cdaaea4ce"} Feb 18 14:26:11 crc kubenswrapper[4739]: I0218 14:26:11.791768 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5cfc6d5787-cxgnr" podStartSLOduration=2.205920109 podStartE2EDuration="3.791742813s" podCreationTimestamp="2026-02-18 14:26:08 +0000 UTC" firstStartedPulling="2026-02-18 14:26:09.731074407 +0000 UTC m=+1602.226795329" lastFinishedPulling="2026-02-18 14:26:11.316897111 +0000 UTC m=+1603.812618033" observedRunningTime="2026-02-18 14:26:11.78489464 +0000 UTC m=+1604.280615572" watchObservedRunningTime="2026-02-18 14:26:11.791742813 +0000 UTC m=+1604.287463735" Feb 18 14:26:11 crc kubenswrapper[4739]: I0218 14:26:11.817947 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-8dd984b75-2cjs7" podStartSLOduration=2.391550832 podStartE2EDuration="3.817926342s" podCreationTimestamp="2026-02-18 14:26:08 +0000 UTC" firstStartedPulling="2026-02-18 14:26:09.890569732 +0000 UTC m=+1602.386290654" lastFinishedPulling="2026-02-18 14:26:11.316945242 +0000 UTC m=+1603.812666164" observedRunningTime="2026-02-18 14:26:11.812081465 +0000 UTC m=+1604.307802417" watchObservedRunningTime="2026-02-18 14:26:11.817926342 +0000 UTC m=+1604.313647264" Feb 18 14:26:12 crc kubenswrapper[4739]: I0218 14:26:12.785158 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:13 crc kubenswrapper[4739]: I0218 14:26:13.589647 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d75f767dc-hd9ps" Feb 18 14:26:13 crc kubenswrapper[4739]: I0218 14:26:13.659984 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-xlgml"] Feb 18 14:26:13 crc kubenswrapper[4739]: I0218 14:26:13.660262 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" podUID="b44bafed-1808-41fc-b2bb-fcd2f1f02a17" containerName="dnsmasq-dns" 
containerID="cri-o://44feab0d878b49b40c9f78094ee6d7d5fb8f3aacd6959e36e5fce0d47077102d" gracePeriod=10 Feb 18 14:26:13 crc kubenswrapper[4739]: I0218 14:26:13.807951 4739 generic.go:334] "Generic (PLEG): container finished" podID="b44bafed-1808-41fc-b2bb-fcd2f1f02a17" containerID="44feab0d878b49b40c9f78094ee6d7d5fb8f3aacd6959e36e5fce0d47077102d" exitCode=0 Feb 18 14:26:13 crc kubenswrapper[4739]: I0218 14:26:13.808151 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" event={"ID":"b44bafed-1808-41fc-b2bb-fcd2f1f02a17","Type":"ContainerDied","Data":"44feab0d878b49b40c9f78094ee6d7d5fb8f3aacd6959e36e5fce0d47077102d"} Feb 18 14:26:14 crc kubenswrapper[4739]: E0218 14:26:14.037934 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice\": RecentStats: unable to find data in memory cache]" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.219919 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.314701 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-ovsdbserver-sb\") pod \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.315168 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-ovsdbserver-nb\") pod \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.315433 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-dns-svc\") pod \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.315544 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-openstack-edpm-ipam\") pod \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.315569 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-config\") pod \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.315617 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgn4q\" (UniqueName: \"kubernetes.io/projected/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-kube-api-access-zgn4q\") pod \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " Feb 18 
14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.315659 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-dns-swift-storage-0\") pod \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\" (UID: \"b44bafed-1808-41fc-b2bb-fcd2f1f02a17\") " Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.329067 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-kube-api-access-zgn4q" (OuterVolumeSpecName: "kube-api-access-zgn4q") pod "b44bafed-1808-41fc-b2bb-fcd2f1f02a17" (UID: "b44bafed-1808-41fc-b2bb-fcd2f1f02a17"). InnerVolumeSpecName "kube-api-access-zgn4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.389185 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "b44bafed-1808-41fc-b2bb-fcd2f1f02a17" (UID: "b44bafed-1808-41fc-b2bb-fcd2f1f02a17"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.391545 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b44bafed-1808-41fc-b2bb-fcd2f1f02a17" (UID: "b44bafed-1808-41fc-b2bb-fcd2f1f02a17"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.391595 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-config" (OuterVolumeSpecName: "config") pod "b44bafed-1808-41fc-b2bb-fcd2f1f02a17" (UID: "b44bafed-1808-41fc-b2bb-fcd2f1f02a17"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.391671 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b44bafed-1808-41fc-b2bb-fcd2f1f02a17" (UID: "b44bafed-1808-41fc-b2bb-fcd2f1f02a17"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.408391 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b44bafed-1808-41fc-b2bb-fcd2f1f02a17" (UID: "b44bafed-1808-41fc-b2bb-fcd2f1f02a17"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.410545 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b44bafed-1808-41fc-b2bb-fcd2f1f02a17" (UID: "b44bafed-1808-41fc-b2bb-fcd2f1f02a17"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.410644 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:26:14 crc kubenswrapper[4739]: E0218 14:26:14.410905 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.418918 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.418960 4739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-config\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.418998 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgn4q\" (UniqueName: \"kubernetes.io/projected/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-kube-api-access-zgn4q\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.419012 4739 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.419022 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.419032 4739 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.419040 4739 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b44bafed-1808-41fc-b2bb-fcd2f1f02a17-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.822437 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" event={"ID":"b44bafed-1808-41fc-b2bb-fcd2f1f02a17","Type":"ContainerDied","Data":"330f6a433a08cd27da7dffd6b8364dcfdd6172336292c14a7b18e098f0eac2e6"} Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.822516 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-xlgml" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.822518 4739 scope.go:117] "RemoveContainer" containerID="44feab0d878b49b40c9f78094ee6d7d5fb8f3aacd6959e36e5fce0d47077102d" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.851188 4739 scope.go:117] "RemoveContainer" containerID="540b32810564c1395af833055ad23799a4b1a66b7693eafbe7c3cebb7f686098" Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.856739 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-xlgml"] Feb 18 14:26:14 crc kubenswrapper[4739]: I0218 14:26:14.870602 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-xlgml"] Feb 18 14:26:15 crc kubenswrapper[4739]: E0218 14:26:15.082844 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51\": RecentStats: unable to find data in memory cache]" Feb 18 14:26:15 crc kubenswrapper[4739]: I0218 14:26:15.427871 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 14:26:16 crc kubenswrapper[4739]: I0218 14:26:16.426793 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b44bafed-1808-41fc-b2bb-fcd2f1f02a17" path="/var/lib/kubelet/pods/b44bafed-1808-41fc-b2bb-fcd2f1f02a17/volumes" Feb 18 14:26:16 crc kubenswrapper[4739]: I0218 14:26:16.855137 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b","Type":"ContainerStarted","Data":"17c3780ab8ac0d7b8c9a7b14ec263189c1e018fcb68ef427cecb539c67cd078b"} Feb 18 14:26:16 crc kubenswrapper[4739]: I0218 14:26:16.904505 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.667097418 podStartE2EDuration="44.904479658s" podCreationTimestamp="2026-02-18 14:25:32 +0000 UTC" firstStartedPulling="2026-02-18 14:25:33.373205771 +0000 UTC m=+1565.868926693" lastFinishedPulling="2026-02-18 14:26:15.610588011 +0000 UTC m=+1608.106308933" observedRunningTime="2026-02-18 14:26:16.878716399 +0000 UTC m=+1609.374437341" watchObservedRunningTime="2026-02-18 14:26:16.904479658 +0000 UTC m=+1609.400200580" Feb 18 14:26:19 crc kubenswrapper[4739]: I0218 14:26:19.088216 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5957545cb-6lrc2" Feb 18 14:26:19 crc kubenswrapper[4739]: I0218 14:26:19.162381 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-cf66499c9-k855m"] Feb 18 14:26:19 crc kubenswrapper[4739]: I0218 14:26:19.163759 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-cf66499c9-k855m" podUID="9b3545e1-27f7-421f-9471-809d6b04706d" containerName="heat-engine" containerID="cri-o://783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b" gracePeriod=60 Feb 18 14:26:20 crc kubenswrapper[4739]: I0218 14:26:20.602823 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5cfc6d5787-cxgnr" Feb 18 14:26:20 crc 
kubenswrapper[4739]: I0218 14:26:20.677060 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-59f4cc7b48-2kzkr"] Feb 18 14:26:20 crc kubenswrapper[4739]: I0218 14:26:20.677281 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-59f4cc7b48-2kzkr" podUID="40d4949b-6d9f-425e-b02f-d8caa727ed99" containerName="heat-api" containerID="cri-o://12eea8fb9fe4ae7ff2a3c678dc4bd3905eb6fb61a72f8c583710252b1c05d211" gracePeriod=60 Feb 18 14:26:20 crc kubenswrapper[4739]: I0218 14:26:20.778255 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-8dd984b75-2cjs7" Feb 18 14:26:20 crc kubenswrapper[4739]: I0218 14:26:20.857142 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-84d894dcf4-4xbcm"] Feb 18 14:26:20 crc kubenswrapper[4739]: I0218 14:26:20.857326 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" podUID="418a2d42-e21e-4d0d-b295-3178e079431c" containerName="heat-cfnapi" containerID="cri-o://35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467" gracePeriod=60 Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.124939 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8nfdw"] Feb 18 14:26:21 crc kubenswrapper[4739]: E0218 14:26:21.125472 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b44bafed-1808-41fc-b2bb-fcd2f1f02a17" containerName="dnsmasq-dns" Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.125488 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b44bafed-1808-41fc-b2bb-fcd2f1f02a17" containerName="dnsmasq-dns" Feb 18 14:26:21 crc kubenswrapper[4739]: E0218 14:26:21.125522 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b44bafed-1808-41fc-b2bb-fcd2f1f02a17" containerName="init" Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.125529 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b44bafed-1808-41fc-b2bb-fcd2f1f02a17" containerName="init" Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.125757 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b44bafed-1808-41fc-b2bb-fcd2f1f02a17" containerName="dnsmasq-dns" Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.127537 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8nfdw" Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.199055 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6j2h\" (UniqueName: \"kubernetes.io/projected/96072604-db66-4bc5-98a7-c62c2d76eb40-kube-api-access-v6j2h\") pod \"community-operators-8nfdw\" (UID: \"96072604-db66-4bc5-98a7-c62c2d76eb40\") " pod="openshift-marketplace/community-operators-8nfdw" Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.199312 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96072604-db66-4bc5-98a7-c62c2d76eb40-catalog-content\") pod \"community-operators-8nfdw\" (UID: \"96072604-db66-4bc5-98a7-c62c2d76eb40\") " pod="openshift-marketplace/community-operators-8nfdw" Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.199404 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96072604-db66-4bc5-98a7-c62c2d76eb40-utilities\") pod \"community-operators-8nfdw\" (UID: \"96072604-db66-4bc5-98a7-c62c2d76eb40\") " pod="openshift-marketplace/community-operators-8nfdw" Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.216613 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8nfdw"] Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.302013 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96072604-db66-4bc5-98a7-c62c2d76eb40-catalog-content\") pod \"community-operators-8nfdw\" (UID: \"96072604-db66-4bc5-98a7-c62c2d76eb40\") " pod="openshift-marketplace/community-operators-8nfdw" Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.302090 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96072604-db66-4bc5-98a7-c62c2d76eb40-utilities\") pod \"community-operators-8nfdw\" (UID: \"96072604-db66-4bc5-98a7-c62c2d76eb40\") " pod="openshift-marketplace/community-operators-8nfdw" Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.302218 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6j2h\" (UniqueName: \"kubernetes.io/projected/96072604-db66-4bc5-98a7-c62c2d76eb40-kube-api-access-v6j2h\") pod \"community-operators-8nfdw\" (UID: \"96072604-db66-4bc5-98a7-c62c2d76eb40\") " pod="openshift-marketplace/community-operators-8nfdw" Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.302728 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96072604-db66-4bc5-98a7-c62c2d76eb40-catalog-content\") pod \"community-operators-8nfdw\" (UID: \"96072604-db66-4bc5-98a7-c62c2d76eb40\") " pod="openshift-marketplace/community-operators-8nfdw" Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.302939 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96072604-db66-4bc5-98a7-c62c2d76eb40-utilities\") pod \"community-operators-8nfdw\" (UID: \"96072604-db66-4bc5-98a7-c62c2d76eb40\") " pod="openshift-marketplace/community-operators-8nfdw" Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.362736 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-v6j2h\" (UniqueName: \"kubernetes.io/projected/96072604-db66-4bc5-98a7-c62c2d76eb40-kube-api-access-v6j2h\") pod \"community-operators-8nfdw\" (UID: \"96072604-db66-4bc5-98a7-c62c2d76eb40\") " pod="openshift-marketplace/community-operators-8nfdw" Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.446336 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8nfdw" Feb 18 14:26:21 crc kubenswrapper[4739]: W0218 14:26:21.955654 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96072604_db66_4bc5_98a7_c62c2d76eb40.slice/crio-1a594f45e975965087f3745b7e4424d1fb7c25896b803da09771f967762a7a70 WatchSource:0}: Error finding container 1a594f45e975965087f3745b7e4424d1fb7c25896b803da09771f967762a7a70: Status 404 returned error can't find the container with id 1a594f45e975965087f3745b7e4424d1fb7c25896b803da09771f967762a7a70 Feb 18 14:26:21 crc kubenswrapper[4739]: I0218 14:26:21.958932 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8nfdw"] Feb 18 14:26:22 crc kubenswrapper[4739]: I0218 14:26:22.925203 4739 generic.go:334] "Generic (PLEG): container finished" podID="96072604-db66-4bc5-98a7-c62c2d76eb40" containerID="759a170bc779a35f3b7259369c90f0aabe4f5a98e1cd13a17bb561eef1c0e510" exitCode=0 Feb 18 14:26:22 crc kubenswrapper[4739]: I0218 14:26:22.925253 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nfdw" event={"ID":"96072604-db66-4bc5-98a7-c62c2d76eb40","Type":"ContainerDied","Data":"759a170bc779a35f3b7259369c90f0aabe4f5a98e1cd13a17bb561eef1c0e510"} Feb 18 14:26:22 crc kubenswrapper[4739]: I0218 14:26:22.925283 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nfdw" event={"ID":"96072604-db66-4bc5-98a7-c62c2d76eb40","Type":"ContainerStarted","Data":"1a594f45e975965087f3745b7e4424d1fb7c25896b803da09771f967762a7a70"} Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.064664 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-xg8g2"] Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.075847 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-xg8g2"] Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.192944 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-k8bxr"] Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.195044 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-k8bxr" Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.197664 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.206174 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-k8bxr"] Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.355356 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-combined-ca-bundle\") pod \"aodh-db-sync-k8bxr\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " pod="openstack/aodh-db-sync-k8bxr" Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.355721 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-config-data\") pod \"aodh-db-sync-k8bxr\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " pod="openstack/aodh-db-sync-k8bxr" Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.355758 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4pw6\" (UniqueName: \"kubernetes.io/projected/18e3b1f2-e16d-4800-90db-c4cc03f891c3-kube-api-access-h4pw6\") pod \"aodh-db-sync-k8bxr\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " pod="openstack/aodh-db-sync-k8bxr" Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.356243 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-scripts\") pod \"aodh-db-sync-k8bxr\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " pod="openstack/aodh-db-sync-k8bxr" Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.458717 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-config-data\") pod \"aodh-db-sync-k8bxr\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " pod="openstack/aodh-db-sync-k8bxr" Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.458791 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4pw6\" (UniqueName: \"kubernetes.io/projected/18e3b1f2-e16d-4800-90db-c4cc03f891c3-kube-api-access-h4pw6\") pod \"aodh-db-sync-k8bxr\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " pod="openstack/aodh-db-sync-k8bxr" Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.458979 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-scripts\") pod \"aodh-db-sync-k8bxr\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " pod="openstack/aodh-db-sync-k8bxr" Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.459132 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-combined-ca-bundle\") pod \"aodh-db-sync-k8bxr\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " pod="openstack/aodh-db-sync-k8bxr" Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.465265 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-scripts\") pod \"aodh-db-sync-k8bxr\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " pod="openstack/aodh-db-sync-k8bxr" Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.465987 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-config-data\") pod \"aodh-db-sync-k8bxr\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " pod="openstack/aodh-db-sync-k8bxr" Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.466677 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-combined-ca-bundle\") pod \"aodh-db-sync-k8bxr\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " pod="openstack/aodh-db-sync-k8bxr" Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.478248 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4pw6\" (UniqueName: \"kubernetes.io/projected/18e3b1f2-e16d-4800-90db-c4cc03f891c3-kube-api-access-h4pw6\") pod \"aodh-db-sync-k8bxr\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " pod="openstack/aodh-db-sync-k8bxr" Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.515857 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-k8bxr" Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.942773 4739 generic.go:334] "Generic (PLEG): container finished" podID="40d4949b-6d9f-425e-b02f-d8caa727ed99" containerID="12eea8fb9fe4ae7ff2a3c678dc4bd3905eb6fb61a72f8c583710252b1c05d211" exitCode=0 Feb 18 14:26:23 crc kubenswrapper[4739]: I0218 14:26:23.943034 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-59f4cc7b48-2kzkr" event={"ID":"40d4949b-6d9f-425e-b02f-d8caa727ed99","Type":"ContainerDied","Data":"12eea8fb9fe4ae7ff2a3c678dc4bd3905eb6fb61a72f8c583710252b1c05d211"} Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.068807 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-k8bxr"] Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.340348 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.408524 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-config-data\") pod \"40d4949b-6d9f-425e-b02f-d8caa727ed99\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.408650 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-config-data-custom\") pod \"40d4949b-6d9f-425e-b02f-d8caa727ed99\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.408690 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-combined-ca-bundle\") pod \"40d4949b-6d9f-425e-b02f-d8caa727ed99\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.408773 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4br8z\" (UniqueName: \"kubernetes.io/projected/40d4949b-6d9f-425e-b02f-d8caa727ed99-kube-api-access-4br8z\") pod \"40d4949b-6d9f-425e-b02f-d8caa727ed99\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.408910 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-internal-tls-certs\") pod \"40d4949b-6d9f-425e-b02f-d8caa727ed99\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.408953 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-public-tls-certs\") pod \"40d4949b-6d9f-425e-b02f-d8caa727ed99\" (UID: \"40d4949b-6d9f-425e-b02f-d8caa727ed99\") " Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.433724 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "40d4949b-6d9f-425e-b02f-d8caa727ed99" (UID: "40d4949b-6d9f-425e-b02f-d8caa727ed99"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.449574 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1543620e-d684-4634-ba89-662f02f2b0e4" path="/var/lib/kubelet/pods/1543620e-d684-4634-ba89-662f02f2b0e4/volumes" Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.466255 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40d4949b-6d9f-425e-b02f-d8caa727ed99-kube-api-access-4br8z" (OuterVolumeSpecName: "kube-api-access-4br8z") pod "40d4949b-6d9f-425e-b02f-d8caa727ed99" (UID: "40d4949b-6d9f-425e-b02f-d8caa727ed99"). InnerVolumeSpecName "kube-api-access-4br8z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.519230 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.519582 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4br8z\" (UniqueName: \"kubernetes.io/projected/40d4949b-6d9f-425e-b02f-d8caa727ed99-kube-api-access-4br8z\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.535659 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40d4949b-6d9f-425e-b02f-d8caa727ed99" (UID: "40d4949b-6d9f-425e-b02f-d8caa727ed99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:24 crc kubenswrapper[4739]: E0218 14:26:24.558867 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice/crio-2e5e6947aea8d7966344adc1bf418e53f5bbe758932ef9f4e574527d50971c51\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod418a2d42_e21e_4d0d_b295_3178e079431c.slice/crio-35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a62b266_b24d_47e5_ae8d_cb8524e1d628.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod418a2d42_e21e_4d0d_b295_3178e079431c.slice/crio-conmon-35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467.scope\": RecentStats: unable to find data in memory cache]" Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.599823 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "40d4949b-6d9f-425e-b02f-d8caa727ed99" (UID: "40d4949b-6d9f-425e-b02f-d8caa727ed99"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.652098 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "40d4949b-6d9f-425e-b02f-d8caa727ed99" (UID: "40d4949b-6d9f-425e-b02f-d8caa727ed99"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.657338 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-config-data" (OuterVolumeSpecName: "config-data") pod "40d4949b-6d9f-425e-b02f-d8caa727ed99" (UID: "40d4949b-6d9f-425e-b02f-d8caa727ed99"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.659512 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.659537 4739 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.659547 4739 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.762072 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40d4949b-6d9f-425e-b02f-d8caa727ed99-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.861604 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.961962 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-k8bxr" event={"ID":"18e3b1f2-e16d-4800-90db-c4cc03f891c3","Type":"ContainerStarted","Data":"a9b6431a1e4c3fdb163f771f15f65db97a8f232887dad7bee508d0c10d0724b9"} Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.964248 4739 generic.go:334] "Generic (PLEG): container finished" podID="418a2d42-e21e-4d0d-b295-3178e079431c" containerID="35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467" exitCode=0 Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.965221 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data\") pod \"418a2d42-e21e-4d0d-b295-3178e079431c\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.965404 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hc2h\" (UniqueName: \"kubernetes.io/projected/418a2d42-e21e-4d0d-b295-3178e079431c-kube-api-access-7hc2h\") pod \"418a2d42-e21e-4d0d-b295-3178e079431c\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.965530 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-combined-ca-bundle\") pod \"418a2d42-e21e-4d0d-b295-3178e079431c\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.965575 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-public-tls-certs\") pod \"418a2d42-e21e-4d0d-b295-3178e079431c\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.965598 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-internal-tls-certs\") pod \"418a2d42-e21e-4d0d-b295-3178e079431c\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.965641 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data-custom\") pod \"418a2d42-e21e-4d0d-b295-3178e079431c\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.967269 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.968358 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" event={"ID":"418a2d42-e21e-4d0d-b295-3178e079431c","Type":"ContainerDied","Data":"35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467"} Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.968437 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84d894dcf4-4xbcm" event={"ID":"418a2d42-e21e-4d0d-b295-3178e079431c","Type":"ContainerDied","Data":"a742c3494bc51e899a5c01b6b095653da1f5cc7a599a99cd559cc59388b29eb4"} Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.968475 4739 scope.go:117] "RemoveContainer" containerID="35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467" Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.977178 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-59f4cc7b48-2kzkr" event={"ID":"40d4949b-6d9f-425e-b02f-d8caa727ed99","Type":"ContainerDied","Data":"182afb94ab91cf9899a4110a4be4e76e5c04c7d5630670036fcfd2f21cbc8a5f"} Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.977297 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-59f4cc7b48-2kzkr" Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.978499 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/418a2d42-e21e-4d0d-b295-3178e079431c-kube-api-access-7hc2h" (OuterVolumeSpecName: "kube-api-access-7hc2h") pod "418a2d42-e21e-4d0d-b295-3178e079431c" (UID: "418a2d42-e21e-4d0d-b295-3178e079431c"). InnerVolumeSpecName "kube-api-access-7hc2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:26:24 crc kubenswrapper[4739]: I0218 14:26:24.981302 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "418a2d42-e21e-4d0d-b295-3178e079431c" (UID: "418a2d42-e21e-4d0d-b295-3178e079431c"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.031755 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-59f4cc7b48-2kzkr"] Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.049357 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-59f4cc7b48-2kzkr"] Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.056434 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "418a2d42-e21e-4d0d-b295-3178e079431c" (UID: "418a2d42-e21e-4d0d-b295-3178e079431c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.068115 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data" (OuterVolumeSpecName: "config-data") pod "418a2d42-e21e-4d0d-b295-3178e079431c" (UID: "418a2d42-e21e-4d0d-b295-3178e079431c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.068608 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data\") pod \"418a2d42-e21e-4d0d-b295-3178e079431c\" (UID: \"418a2d42-e21e-4d0d-b295-3178e079431c\") " Feb 18 14:26:25 crc kubenswrapper[4739]: W0218 14:26:25.068761 4739 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/418a2d42-e21e-4d0d-b295-3178e079431c/volumes/kubernetes.io~secret/config-data Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.068771 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data" (OuterVolumeSpecName: "config-data") pod "418a2d42-e21e-4d0d-b295-3178e079431c" (UID: "418a2d42-e21e-4d0d-b295-3178e079431c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.070909 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.070934 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.070944 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.070954 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hc2h\" (UniqueName: \"kubernetes.io/projected/418a2d42-e21e-4d0d-b295-3178e079431c-kube-api-access-7hc2h\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.075615 4739 scope.go:117] "RemoveContainer" containerID="35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467" Feb 18 14:26:25 crc kubenswrapper[4739]: E0218 14:26:25.076263 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467\": container with ID starting with 35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467 not found: ID does not exist" containerID="35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467" Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.076380 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467"} err="failed to get container status \"35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467\": rpc error: code = NotFound desc = could not find container \"35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467\": container with ID starting with 35887257ed712f8d344e0956b8dd91e0fc505a578a222fd6cfcb69a0a0614467 not found: ID does not exist" Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.076408 4739 scope.go:117] "RemoveContainer" containerID="12eea8fb9fe4ae7ff2a3c678dc4bd3905eb6fb61a72f8c583710252b1c05d211" Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.080919 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "418a2d42-e21e-4d0d-b295-3178e079431c" (UID: "418a2d42-e21e-4d0d-b295-3178e079431c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.093560 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "418a2d42-e21e-4d0d-b295-3178e079431c" (UID: "418a2d42-e21e-4d0d-b295-3178e079431c"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.173057 4739 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.173128 4739 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/418a2d42-e21e-4d0d-b295-3178e079431c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.321168 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-84d894dcf4-4xbcm"] Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.332060 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-84d894dcf4-4xbcm"] Feb 18 14:26:25 crc kubenswrapper[4739]: I0218 14:26:25.999073 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nfdw" event={"ID":"96072604-db66-4bc5-98a7-c62c2d76eb40","Type":"ContainerStarted","Data":"0ac768b310244a0425581589d8a72607c1c9ad5cfef99e8994bfd0a2fa8cd429"} Feb 18 14:26:26 crc kubenswrapper[4739]: I0218 14:26:26.428949 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40d4949b-6d9f-425e-b02f-d8caa727ed99" path="/var/lib/kubelet/pods/40d4949b-6d9f-425e-b02f-d8caa727ed99/volumes" Feb 18 14:26:26 crc kubenswrapper[4739]: I0218 14:26:26.429692 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="418a2d42-e21e-4d0d-b295-3178e079431c" path="/var/lib/kubelet/pods/418a2d42-e21e-4d0d-b295-3178e079431c/volumes" Feb 18 14:26:28 crc kubenswrapper[4739]: E0218 14:26:28.017420 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:26:28 crc kubenswrapper[4739]: E0218 14:26:28.020234 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:26:28 crc kubenswrapper[4739]: E0218 14:26:28.023417 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:26:28 crc kubenswrapper[4739]: E0218 14:26:28.023686 4739 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-cf66499c9-k855m" podUID="9b3545e1-27f7-421f-9471-809d6b04706d" containerName="heat-engine" Feb 18 14:26:28 crc kubenswrapper[4739]: I0218 14:26:28.027627 4739 generic.go:334] "Generic (PLEG): container finished" podID="c71b6fb5-d59d-479d-b3fc-996d14bd93ed" containerID="9c40a962e22b100be23a7a0163ebcb66d15c4bd51bb227f4c767cbf6c58812d0" exitCode=0 Feb 18 
14:26:28 crc kubenswrapper[4739]: I0218 14:26:28.027720 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c71b6fb5-d59d-479d-b3fc-996d14bd93ed","Type":"ContainerDied","Data":"9c40a962e22b100be23a7a0163ebcb66d15c4bd51bb227f4c767cbf6c58812d0"} Feb 18 14:26:29 crc kubenswrapper[4739]: I0218 14:26:29.040994 4739 generic.go:334] "Generic (PLEG): container finished" podID="83da58fc-6d28-4a56-abc1-00267082c6b6" containerID="109a1d01b2b388822b4017533289f525bb0875693261feeb825b93643fe2bf46" exitCode=0 Feb 18 14:26:29 crc kubenswrapper[4739]: I0218 14:26:29.041078 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"83da58fc-6d28-4a56-abc1-00267082c6b6","Type":"ContainerDied","Data":"109a1d01b2b388822b4017533289f525bb0875693261feeb825b93643fe2bf46"} Feb 18 14:26:29 crc kubenswrapper[4739]: I0218 14:26:29.045373 4739 generic.go:334] "Generic (PLEG): container finished" podID="96072604-db66-4bc5-98a7-c62c2d76eb40" containerID="0ac768b310244a0425581589d8a72607c1c9ad5cfef99e8994bfd0a2fa8cd429" exitCode=0 Feb 18 14:26:29 crc kubenswrapper[4739]: I0218 14:26:29.045415 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nfdw" event={"ID":"96072604-db66-4bc5-98a7-c62c2d76eb40","Type":"ContainerDied","Data":"0ac768b310244a0425581589d8a72607c1c9ad5cfef99e8994bfd0a2fa8cd429"} Feb 18 14:26:29 crc kubenswrapper[4739]: I0218 14:26:29.411595 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:26:29 crc kubenswrapper[4739]: E0218 14:26:29.411919 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:26:30 crc kubenswrapper[4739]: I0218 14:26:30.957936 4739 scope.go:117] "RemoveContainer" containerID="4041330ab9876dd3ccc3269fd63191d50dd8718454d5e9168b48f08746b23647" Feb 18 14:26:31 crc kubenswrapper[4739]: I0218 14:26:31.067028 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"83da58fc-6d28-4a56-abc1-00267082c6b6","Type":"ContainerStarted","Data":"c4b04fa02b67b0be2421cd52f673e42500986256f3427c38976b1dc14f3dd2b4"} Feb 18 14:26:31 crc kubenswrapper[4739]: I0218 14:26:31.779711 4739 scope.go:117] "RemoveContainer" containerID="405502ac3609c5b3fd9875f3041040fcb2500cda1197ef6aa5109c839a432fea" Feb 18 14:26:32 crc kubenswrapper[4739]: I0218 14:26:32.084657 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c71b6fb5-d59d-479d-b3fc-996d14bd93ed","Type":"ContainerStarted","Data":"94fb4b4e0ed1e4354cf0fd45d810ad5a001321ba13ecffee37c3fca4d8107def"} Feb 18 14:26:32 crc kubenswrapper[4739]: I0218 14:26:32.084729 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 18 14:26:32 crc kubenswrapper[4739]: I0218 14:26:32.085174 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:26:32 crc kubenswrapper[4739]: I0218 14:26:32.115250 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=47.11522351 podStartE2EDuration="47.11522351s" podCreationTimestamp="2026-02-18 14:25:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:26:32.105369963 +0000 UTC m=+1624.601090895" watchObservedRunningTime="2026-02-18 14:26:32.11522351 +0000 UTC m=+1624.610944442" Feb 18 14:26:32 crc kubenswrapper[4739]: I0218 14:26:32.134484 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=40.134465462 podStartE2EDuration="40.134465462s" podCreationTimestamp="2026-02-18 14:25:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:26:32.126345068 +0000 UTC m=+1624.622065990" watchObservedRunningTime="2026-02-18 14:26:32.134465462 +0000 UTC m=+1624.630186404" Feb 18 14:26:32 crc kubenswrapper[4739]: I0218 14:26:32.534283 4739 scope.go:117] "RemoveContainer" containerID="17b7a228a9fbcf851aed446c2de3568b52fb77affe9764c39277650c860631aa" Feb 18 14:26:32 crc kubenswrapper[4739]: I0218 14:26:32.555063 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 14:26:32 crc kubenswrapper[4739]: I0218 14:26:32.612709 4739 scope.go:117] "RemoveContainer" containerID="7c4bb8b1c5394b1feff00226f10597657ca326d8c75003b9dcfbb17edea1d2b3" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.106424 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-k8bxr" event={"ID":"18e3b1f2-e16d-4800-90db-c4cc03f891c3","Type":"ContainerStarted","Data":"ea37bd2fe6c3cde4519476c0d93705aa44f3d3921ef14e7b974cb0ef1c293843"} Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.130938 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5"] Feb 18 14:26:33 crc kubenswrapper[4739]: E0218 14:26:33.131563 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40d4949b-6d9f-425e-b02f-d8caa727ed99" containerName="heat-api" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.131578 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="40d4949b-6d9f-425e-b02f-d8caa727ed99" containerName="heat-api" Feb 18 14:26:33 crc kubenswrapper[4739]: E0218 14:26:33.131597 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="418a2d42-e21e-4d0d-b295-3178e079431c" containerName="heat-cfnapi" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.131606 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="418a2d42-e21e-4d0d-b295-3178e079431c" containerName="heat-cfnapi" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.131828 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="418a2d42-e21e-4d0d-b295-3178e079431c" containerName="heat-cfnapi" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.131855 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="40d4949b-6d9f-425e-b02f-d8caa727ed99" containerName="heat-api" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.132678 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.134897 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.135101 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.135134 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.135470 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-k8bxr" podStartSLOduration=1.672601271 podStartE2EDuration="10.135453215s" podCreationTimestamp="2026-02-18 14:26:23 +0000 UTC" firstStartedPulling="2026-02-18 14:26:24.083029393 +0000 UTC m=+1616.578750315" lastFinishedPulling="2026-02-18 14:26:32.545881337 +0000 UTC m=+1625.041602259" observedRunningTime="2026-02-18 14:26:33.126608933 +0000 UTC m=+1625.622329865" watchObservedRunningTime="2026-02-18 14:26:33.135453215 +0000 UTC m=+1625.631174137" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.141492 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.179770 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5"] Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.216902 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.217314 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqkdj\" (UniqueName: \"kubernetes.io/projected/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-kube-api-access-fqkdj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.217478 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.217554 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 
14:26:33.320318 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqkdj\" (UniqueName: \"kubernetes.io/projected/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-kube-api-access-fqkdj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.320463 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.320531 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.320581 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.330558 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.331175 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.334912 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc kubenswrapper[4739]: I0218 14:26:33.340656 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqkdj\" (UniqueName: \"kubernetes.io/projected/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-kube-api-access-fqkdj\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:33 crc 
kubenswrapper[4739]: I0218 14:26:33.451701 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:26:34 crc kubenswrapper[4739]: I0218 14:26:34.137192 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nfdw" event={"ID":"96072604-db66-4bc5-98a7-c62c2d76eb40","Type":"ContainerStarted","Data":"bb9d0dc56ee769336340065dd5699e513fe035812eb92fd0d0e14c8dd10b87f4"} Feb 18 14:26:34 crc kubenswrapper[4739]: I0218 14:26:34.194355 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8nfdw" podStartSLOduration=3.062977521 podStartE2EDuration="13.194333229s" podCreationTimestamp="2026-02-18 14:26:21 +0000 UTC" firstStartedPulling="2026-02-18 14:26:22.926972455 +0000 UTC m=+1615.422693377" lastFinishedPulling="2026-02-18 14:26:33.058328163 +0000 UTC m=+1625.554049085" observedRunningTime="2026-02-18 14:26:34.168618675 +0000 UTC m=+1626.664339607" watchObservedRunningTime="2026-02-18 14:26:34.194333229 +0000 UTC m=+1626.690054151" Feb 18 14:26:34 crc kubenswrapper[4739]: I0218 14:26:34.629798 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5"] Feb 18 14:26:35 crc kubenswrapper[4739]: I0218 14:26:35.166292 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" event={"ID":"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb","Type":"ContainerStarted","Data":"b8cb9ed99d22914d1a0e1925f4de01b4e33640477ab310a82f26be58456df960"} Feb 18 14:26:38 crc kubenswrapper[4739]: E0218 14:26:38.017713 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:26:38 crc kubenswrapper[4739]: E0218 14:26:38.023096 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:26:38 crc kubenswrapper[4739]: E0218 14:26:38.024752 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:26:38 crc kubenswrapper[4739]: E0218 14:26:38.024839 4739 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-cf66499c9-k855m" podUID="9b3545e1-27f7-421f-9471-809d6b04706d" containerName="heat-engine" Feb 18 14:26:41 crc kubenswrapper[4739]: I0218 14:26:41.447325 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8nfdw" Feb 18 14:26:41 crc kubenswrapper[4739]: I0218 14:26:41.447899 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-8nfdw" Feb 18 14:26:41 crc kubenswrapper[4739]: I0218 14:26:41.514015 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8nfdw" Feb 18 14:26:42 crc kubenswrapper[4739]: I0218 14:26:42.308866 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8nfdw" Feb 18 14:26:42 crc kubenswrapper[4739]: I0218 14:26:42.364863 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8nfdw"] Feb 18 14:26:43 crc kubenswrapper[4739]: I0218 14:26:43.104776 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="83da58fc-6d28-4a56-abc1-00267082c6b6" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.16:5671: connect: connection refused" Feb 18 14:26:43 crc kubenswrapper[4739]: I0218 14:26:43.410551 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:26:43 crc kubenswrapper[4739]: E0218 14:26:43.410854 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:26:44 crc kubenswrapper[4739]: I0218 14:26:44.275654 4739 generic.go:334] "Generic (PLEG): container finished" podID="9b3545e1-27f7-421f-9471-809d6b04706d" containerID="783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b" exitCode=0 Feb 18 14:26:44 crc kubenswrapper[4739]: I0218 14:26:44.276198 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8nfdw" podUID="96072604-db66-4bc5-98a7-c62c2d76eb40" containerName="registry-server" containerID="cri-o://bb9d0dc56ee769336340065dd5699e513fe035812eb92fd0d0e14c8dd10b87f4" gracePeriod=2 Feb 18 14:26:44 crc kubenswrapper[4739]: I0218 14:26:44.275869 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-cf66499c9-k855m" event={"ID":"9b3545e1-27f7-421f-9471-809d6b04706d","Type":"ContainerDied","Data":"783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b"} Feb 18 14:26:45 crc kubenswrapper[4739]: I0218 14:26:45.292199 4739 generic.go:334] "Generic (PLEG): container finished" podID="96072604-db66-4bc5-98a7-c62c2d76eb40" containerID="bb9d0dc56ee769336340065dd5699e513fe035812eb92fd0d0e14c8dd10b87f4" exitCode=0 Feb 18 14:26:45 crc kubenswrapper[4739]: I0218 14:26:45.292248 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nfdw" event={"ID":"96072604-db66-4bc5-98a7-c62c2d76eb40","Type":"ContainerDied","Data":"bb9d0dc56ee769336340065dd5699e513fe035812eb92fd0d0e14c8dd10b87f4"} Feb 18 14:26:46 crc kubenswrapper[4739]: I0218 14:26:46.048231 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="c71b6fb5-d59d-479d-b3fc-996d14bd93ed" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.15:5671: connect: connection refused" Feb 18 14:26:46 crc kubenswrapper[4739]: I0218 14:26:46.305877 4739 generic.go:334] "Generic (PLEG): container finished" 
podID="18e3b1f2-e16d-4800-90db-c4cc03f891c3" containerID="ea37bd2fe6c3cde4519476c0d93705aa44f3d3921ef14e7b974cb0ef1c293843" exitCode=0 Feb 18 14:26:46 crc kubenswrapper[4739]: I0218 14:26:46.305922 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-k8bxr" event={"ID":"18e3b1f2-e16d-4800-90db-c4cc03f891c3","Type":"ContainerDied","Data":"ea37bd2fe6c3cde4519476c0d93705aa44f3d3921ef14e7b974cb0ef1c293843"} Feb 18 14:26:48 crc kubenswrapper[4739]: E0218 14:26:48.015765 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b is running failed: container process not found" containerID="783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:26:48 crc kubenswrapper[4739]: E0218 14:26:48.017090 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b is running failed: container process not found" containerID="783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:26:48 crc kubenswrapper[4739]: E0218 14:26:48.017723 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b is running failed: container process not found" containerID="783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 14:26:48 crc kubenswrapper[4739]: E0218 14:26:48.017758 4739 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-cf66499c9-k855m" podUID="9b3545e1-27f7-421f-9471-809d6b04706d" containerName="heat-engine" Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.310076 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-k8bxr" Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.411781 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-k8bxr" event={"ID":"18e3b1f2-e16d-4800-90db-c4cc03f891c3","Type":"ContainerDied","Data":"a9b6431a1e4c3fdb163f771f15f65db97a8f232887dad7bee508d0c10d0724b9"} Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.412033 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9b6431a1e4c3fdb163f771f15f65db97a8f232887dad7bee508d0c10d0724b9" Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.412084 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-k8bxr" Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.480949 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-config-data\") pod \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.481064 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4pw6\" (UniqueName: \"kubernetes.io/projected/18e3b1f2-e16d-4800-90db-c4cc03f891c3-kube-api-access-h4pw6\") pod \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.481267 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-scripts\") pod \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.481342 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-combined-ca-bundle\") pod \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\" (UID: \"18e3b1f2-e16d-4800-90db-c4cc03f891c3\") " Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.500293 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18e3b1f2-e16d-4800-90db-c4cc03f891c3-kube-api-access-h4pw6" (OuterVolumeSpecName: "kube-api-access-h4pw6") pod "18e3b1f2-e16d-4800-90db-c4cc03f891c3" (UID: "18e3b1f2-e16d-4800-90db-c4cc03f891c3"). InnerVolumeSpecName "kube-api-access-h4pw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.522089 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-scripts" (OuterVolumeSpecName: "scripts") pod "18e3b1f2-e16d-4800-90db-c4cc03f891c3" (UID: "18e3b1f2-e16d-4800-90db-c4cc03f891c3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.528959 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-config-data" (OuterVolumeSpecName: "config-data") pod "18e3b1f2-e16d-4800-90db-c4cc03f891c3" (UID: "18e3b1f2-e16d-4800-90db-c4cc03f891c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.560414 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18e3b1f2-e16d-4800-90db-c4cc03f891c3" (UID: "18e3b1f2-e16d-4800-90db-c4cc03f891c3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.589399 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.589426 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.589437 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18e3b1f2-e16d-4800-90db-c4cc03f891c3-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:49 crc kubenswrapper[4739]: I0218 14:26:49.593526 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4pw6\" (UniqueName: \"kubernetes.io/projected/18e3b1f2-e16d-4800-90db-c4cc03f891c3-kube-api-access-h4pw6\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.046779 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.060654 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8nfdw" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.214298 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96072604-db66-4bc5-98a7-c62c2d76eb40-catalog-content\") pod \"96072604-db66-4bc5-98a7-c62c2d76eb40\" (UID: \"96072604-db66-4bc5-98a7-c62c2d76eb40\") " Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.214930 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96072604-db66-4bc5-98a7-c62c2d76eb40-utilities\") pod \"96072604-db66-4bc5-98a7-c62c2d76eb40\" (UID: \"96072604-db66-4bc5-98a7-c62c2d76eb40\") " Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.215007 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-config-data-custom\") pod \"9b3545e1-27f7-421f-9471-809d6b04706d\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.215062 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-config-data\") pod \"9b3545e1-27f7-421f-9471-809d6b04706d\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.215132 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njr9t\" (UniqueName: \"kubernetes.io/projected/9b3545e1-27f7-421f-9471-809d6b04706d-kube-api-access-njr9t\") pod \"9b3545e1-27f7-421f-9471-809d6b04706d\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.215217 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6j2h\" (UniqueName: \"kubernetes.io/projected/96072604-db66-4bc5-98a7-c62c2d76eb40-kube-api-access-v6j2h\") pod 
\"96072604-db66-4bc5-98a7-c62c2d76eb40\" (UID: \"96072604-db66-4bc5-98a7-c62c2d76eb40\") " Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.215250 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-combined-ca-bundle\") pod \"9b3545e1-27f7-421f-9471-809d6b04706d\" (UID: \"9b3545e1-27f7-421f-9471-809d6b04706d\") " Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.216887 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96072604-db66-4bc5-98a7-c62c2d76eb40-utilities" (OuterVolumeSpecName: "utilities") pod "96072604-db66-4bc5-98a7-c62c2d76eb40" (UID: "96072604-db66-4bc5-98a7-c62c2d76eb40"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.220092 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9b3545e1-27f7-421f-9471-809d6b04706d" (UID: "9b3545e1-27f7-421f-9471-809d6b04706d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.222131 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b3545e1-27f7-421f-9471-809d6b04706d-kube-api-access-njr9t" (OuterVolumeSpecName: "kube-api-access-njr9t") pod "9b3545e1-27f7-421f-9471-809d6b04706d" (UID: "9b3545e1-27f7-421f-9471-809d6b04706d"). InnerVolumeSpecName "kube-api-access-njr9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.223325 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96072604-db66-4bc5-98a7-c62c2d76eb40-kube-api-access-v6j2h" (OuterVolumeSpecName: "kube-api-access-v6j2h") pod "96072604-db66-4bc5-98a7-c62c2d76eb40" (UID: "96072604-db66-4bc5-98a7-c62c2d76eb40"). InnerVolumeSpecName "kube-api-access-v6j2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.254705 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b3545e1-27f7-421f-9471-809d6b04706d" (UID: "9b3545e1-27f7-421f-9471-809d6b04706d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.258247 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96072604-db66-4bc5-98a7-c62c2d76eb40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96072604-db66-4bc5-98a7-c62c2d76eb40" (UID: "96072604-db66-4bc5-98a7-c62c2d76eb40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.296930 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-config-data" (OuterVolumeSpecName: "config-data") pod "9b3545e1-27f7-421f-9471-809d6b04706d" (UID: "9b3545e1-27f7-421f-9471-809d6b04706d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.317165 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96072604-db66-4bc5-98a7-c62c2d76eb40-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.317197 4739 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.317211 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.317220 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njr9t\" (UniqueName: \"kubernetes.io/projected/9b3545e1-27f7-421f-9471-809d6b04706d-kube-api-access-njr9t\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.317230 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6j2h\" (UniqueName: \"kubernetes.io/projected/96072604-db66-4bc5-98a7-c62c2d76eb40-kube-api-access-v6j2h\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.317237 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b3545e1-27f7-421f-9471-809d6b04706d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.317245 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96072604-db66-4bc5-98a7-c62c2d76eb40-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.440707 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8nfdw" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.441314 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nfdw" event={"ID":"96072604-db66-4bc5-98a7-c62c2d76eb40","Type":"ContainerDied","Data":"1a594f45e975965087f3745b7e4424d1fb7c25896b803da09771f967762a7a70"} Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.441432 4739 scope.go:117] "RemoveContainer" containerID="bb9d0dc56ee769336340065dd5699e513fe035812eb92fd0d0e14c8dd10b87f4" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.444491 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-cf66499c9-k855m" event={"ID":"9b3545e1-27f7-421f-9471-809d6b04706d","Type":"ContainerDied","Data":"34402e3be46581b4f11650c5f4f2ec4f1afe7d82b3230635fe9430959d1f9c69"} Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.444568 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-cf66499c9-k855m" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.486639 4739 scope.go:117] "RemoveContainer" containerID="0ac768b310244a0425581589d8a72607c1c9ad5cfef99e8994bfd0a2fa8cd429" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.491338 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8nfdw"] Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.508583 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8nfdw"] Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.522591 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-cf66499c9-k855m"] Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.526356 4739 scope.go:117] "RemoveContainer" containerID="759a170bc779a35f3b7259369c90f0aabe4f5a98e1cd13a17bb561eef1c0e510" Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.546329 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-cf66499c9-k855m"] Feb 18 14:26:50 crc kubenswrapper[4739]: I0218 14:26:50.590466 4739 scope.go:117] "RemoveContainer" containerID="783fa9b6fd10cf147608ee1996396bbf542a018813cd41eab1a6b667ec39a21b" Feb 18 14:26:51 crc kubenswrapper[4739]: I0218 14:26:51.456256 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" event={"ID":"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb","Type":"ContainerStarted","Data":"39ef6715d910bad18771b0adccde4ffddd06d4f64ddaf3ce90256b5a58ff4742"} Feb 18 14:26:51 crc kubenswrapper[4739]: I0218 14:26:51.477652 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" podStartSLOduration=3.050248937 podStartE2EDuration="18.477633184s" podCreationTimestamp="2026-02-18 14:26:33 +0000 UTC" firstStartedPulling="2026-02-18 14:26:34.647036219 +0000 UTC m=+1627.142757141" lastFinishedPulling="2026-02-18 14:26:50.074420466 +0000 UTC m=+1642.570141388" observedRunningTime="2026-02-18 14:26:51.475410518 +0000 UTC m=+1643.971131450" watchObservedRunningTime="2026-02-18 14:26:51.477633184 +0000 UTC m=+1643.973354106" Feb 18 14:26:52 crc kubenswrapper[4739]: I0218 14:26:52.426094 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96072604-db66-4bc5-98a7-c62c2d76eb40" path="/var/lib/kubelet/pods/96072604-db66-4bc5-98a7-c62c2d76eb40/volumes" Feb 18 14:26:52 crc kubenswrapper[4739]: I0218 14:26:52.427389 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b3545e1-27f7-421f-9471-809d6b04706d" path="/var/lib/kubelet/pods/9b3545e1-27f7-421f-9471-809d6b04706d/volumes" Feb 18 14:26:53 crc kubenswrapper[4739]: I0218 14:26:53.103646 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 18 14:26:53 crc kubenswrapper[4739]: I0218 14:26:53.186061 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 18 14:26:53 crc kubenswrapper[4739]: I0218 14:26:53.271696 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 18 14:26:53 crc kubenswrapper[4739]: I0218 14:26:53.272013 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-api" containerID="cri-o://ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7" 
gracePeriod=30 Feb 18 14:26:53 crc kubenswrapper[4739]: I0218 14:26:53.272319 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-notifier" containerID="cri-o://d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578" gracePeriod=30 Feb 18 14:26:53 crc kubenswrapper[4739]: I0218 14:26:53.272522 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-evaluator" containerID="cri-o://0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba" gracePeriod=30 Feb 18 14:26:53 crc kubenswrapper[4739]: I0218 14:26:53.272726 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-listener" containerID="cri-o://5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2" gracePeriod=30 Feb 18 14:26:54 crc kubenswrapper[4739]: I0218 14:26:54.549809 4739 generic.go:334] "Generic (PLEG): container finished" podID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerID="0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba" exitCode=0 Feb 18 14:26:54 crc kubenswrapper[4739]: I0218 14:26:54.550133 4739 generic.go:334] "Generic (PLEG): container finished" podID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerID="ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7" exitCode=0 Feb 18 14:26:54 crc kubenswrapper[4739]: I0218 14:26:54.549913 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e","Type":"ContainerDied","Data":"0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba"} Feb 18 14:26:54 crc kubenswrapper[4739]: I0218 14:26:54.550178 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e","Type":"ContainerDied","Data":"ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7"} Feb 18 14:26:56 crc kubenswrapper[4739]: I0218 14:26:56.041607 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 14:26:58 crc kubenswrapper[4739]: I0218 14:26:58.344355 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" containerName="rabbitmq" containerID="cri-o://86dcf3153be4cedc4f3f4f557f9adbf8d2dc9ddb02d52663f80236312bb555f6" gracePeriod=604795 Feb 18 14:26:58 crc kubenswrapper[4739]: I0218 14:26:58.423818 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:26:58 crc kubenswrapper[4739]: E0218 14:26:58.424379 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.235323 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.336007 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-config-data\") pod \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.336087 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-scripts\") pod \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.336348 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-internal-tls-certs\") pod \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.336414 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-public-tls-certs\") pod \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.336491 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bhhc\" (UniqueName: \"kubernetes.io/projected/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-kube-api-access-6bhhc\") pod \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.336562 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-combined-ca-bundle\") pod \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\" (UID: \"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e\") " Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.350740 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-kube-api-access-6bhhc" (OuterVolumeSpecName: "kube-api-access-6bhhc") pod "f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" (UID: "f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e"). InnerVolumeSpecName "kube-api-access-6bhhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.357610 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-scripts" (OuterVolumeSpecName: "scripts") pod "f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" (UID: "f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.443584 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bhhc\" (UniqueName: \"kubernetes.io/projected/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-kube-api-access-6bhhc\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.443675 4739 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.461598 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" (UID: "f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.497011 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" (UID: "f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.546595 4739 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.548632 4739 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.558409 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" (UID: "f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.560259 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-config-data" (OuterVolumeSpecName: "config-data") pod "f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" (UID: "f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.607041 4739 generic.go:334] "Generic (PLEG): container finished" podID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerID="5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2" exitCode=0 Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.607347 4739 generic.go:334] "Generic (PLEG): container finished" podID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerID="d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578" exitCode=0 Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.607134 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.607119 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e","Type":"ContainerDied","Data":"5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2"} Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.607490 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e","Type":"ContainerDied","Data":"d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578"} Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.607503 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e","Type":"ContainerDied","Data":"4d6f0aeaea08a012f733e13300610a5640aaa1fafeeed5ec43bbbd5b2b9a8193"} Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.607519 4739 scope.go:117] "RemoveContainer" containerID="5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.650777 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.650811 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.703143 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.719193 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.730216 4739 scope.go:117] "RemoveContainer" containerID="d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.745095 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.745711 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18e3b1f2-e16d-4800-90db-c4cc03f891c3" containerName="aodh-db-sync" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.745732 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="18e3b1f2-e16d-4800-90db-c4cc03f891c3" containerName="aodh-db-sync" Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.745750 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b3545e1-27f7-421f-9471-809d6b04706d" containerName="heat-engine" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.745757 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b3545e1-27f7-421f-9471-809d6b04706d" containerName="heat-engine" Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.745771 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-evaluator" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.745780 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-evaluator" Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.745797 4739 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-listener" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.745804 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-listener" Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.745817 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96072604-db66-4bc5-98a7-c62c2d76eb40" containerName="extract-utilities" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.745825 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="96072604-db66-4bc5-98a7-c62c2d76eb40" containerName="extract-utilities" Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.745836 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96072604-db66-4bc5-98a7-c62c2d76eb40" containerName="registry-server" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.745843 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="96072604-db66-4bc5-98a7-c62c2d76eb40" containerName="registry-server" Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.745861 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-notifier" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.745868 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-notifier" Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.745879 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-api" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.745884 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-api" Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.745899 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96072604-db66-4bc5-98a7-c62c2d76eb40" containerName="extract-content" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.745905 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="96072604-db66-4bc5-98a7-c62c2d76eb40" containerName="extract-content" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.746135 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="18e3b1f2-e16d-4800-90db-c4cc03f891c3" containerName="aodh-db-sync" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.746154 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-api" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.746165 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-notifier" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.746178 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-evaluator" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.746199 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" containerName="aodh-listener" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.746213 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b3545e1-27f7-421f-9471-809d6b04706d" containerName="heat-engine" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.746227 4739 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="96072604-db66-4bc5-98a7-c62c2d76eb40" containerName="registry-server" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.748308 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.754245 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.754351 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.754391 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.758815 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.763046 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-747v8" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.766833 4739 scope.go:117] "RemoveContainer" containerID="0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.773729 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.838691 4739 scope.go:117] "RemoveContainer" containerID="ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.860025 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-internal-tls-certs\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.860103 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-public-tls-certs\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.860276 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-scripts\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.860304 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6q52\" (UniqueName: \"kubernetes.io/projected/44288fd5-6ac4-4d9f-b16e-97ae45b79030-kube-api-access-l6q52\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.860340 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-combined-ca-bundle\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.860418 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-config-data\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.906849 4739 scope.go:117] "RemoveContainer" containerID="5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2" Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.908598 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2\": container with ID starting with 5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2 not found: ID does not exist" containerID="5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.908640 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2"} err="failed to get container status \"5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2\": rpc error: code = NotFound desc = could not find container \"5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2\": container with ID starting with 5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2 not found: ID does not exist" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.908671 4739 scope.go:117] "RemoveContainer" containerID="d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578" Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.908948 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578\": container with ID starting with d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578 not found: ID does not exist" containerID="d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.908985 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578"} err="failed to get container status \"d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578\": rpc error: code = NotFound desc = could not find container \"d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578\": container with ID starting with d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578 not found: ID does not exist" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.909000 4739 scope.go:117] "RemoveContainer" containerID="0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba" Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.909643 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba\": container with ID starting with 0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba not found: ID does not exist" containerID="0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.909697 4739 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba"} err="failed to get container status \"0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba\": rpc error: code = NotFound desc = could not find container \"0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba\": container with ID starting with 0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba not found: ID does not exist" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.909723 4739 scope.go:117] "RemoveContainer" containerID="ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7" Feb 18 14:26:59 crc kubenswrapper[4739]: E0218 14:26:59.910602 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7\": container with ID starting with ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7 not found: ID does not exist" containerID="ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.910642 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7"} err="failed to get container status \"ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7\": rpc error: code = NotFound desc = could not find container \"ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7\": container with ID starting with ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7 not found: ID does not exist" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.910669 4739 scope.go:117] "RemoveContainer" containerID="5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.912608 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2"} err="failed to get container status \"5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2\": rpc error: code = NotFound desc = could not find container \"5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2\": container with ID starting with 5248a54c88f06ba30f0e894f0ce4c14d76a8109ce322da2f55602e40291503a2 not found: ID does not exist" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.912633 4739 scope.go:117] "RemoveContainer" containerID="d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.913030 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578"} err="failed to get container status \"d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578\": rpc error: code = NotFound desc = could not find container \"d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578\": container with ID starting with d504fadc1d0a3c0bae033263265552e3bc82a4fe1ab5756ab741130de2590578 not found: ID does not exist" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.913070 4739 scope.go:117] "RemoveContainer" containerID="0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.913339 4739 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba"} err="failed to get container status \"0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba\": rpc error: code = NotFound desc = could not find container \"0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba\": container with ID starting with 0fd9d5c70ca6c29a59349415385e4f7b600cd04a44fc9c9ff5cf7e584fccfcba not found: ID does not exist" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.913361 4739 scope.go:117] "RemoveContainer" containerID="ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.913605 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7"} err="failed to get container status \"ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7\": rpc error: code = NotFound desc = could not find container \"ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7\": container with ID starting with ddbf9584f347c75bdf993d5c775ac375f190f3ed1bd6dffc73608fe1333ae1d7 not found: ID does not exist" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.962282 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-scripts\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.962337 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6q52\" (UniqueName: \"kubernetes.io/projected/44288fd5-6ac4-4d9f-b16e-97ae45b79030-kube-api-access-l6q52\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.962389 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-combined-ca-bundle\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.962505 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-config-data\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.962715 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-internal-tls-certs\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.963096 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-public-tls-certs\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.968615 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-scripts\") pod \"aodh-0\" 
(UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.968972 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-internal-tls-certs\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.969368 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-public-tls-certs\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.971264 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-config-data\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.992246 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44288fd5-6ac4-4d9f-b16e-97ae45b79030-combined-ca-bundle\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:26:59 crc kubenswrapper[4739]: I0218 14:26:59.995153 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6q52\" (UniqueName: \"kubernetes.io/projected/44288fd5-6ac4-4d9f-b16e-97ae45b79030-kube-api-access-l6q52\") pod \"aodh-0\" (UID: \"44288fd5-6ac4-4d9f-b16e-97ae45b79030\") " pod="openstack/aodh-0" Feb 18 14:27:00 crc kubenswrapper[4739]: I0218 14:27:00.073173 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 18 14:27:00 crc kubenswrapper[4739]: I0218 14:27:00.436394 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e" path="/var/lib/kubelet/pods/f7f699b8-95a0-4a37-8a9b-fb4bd7b46d3e/volumes" Feb 18 14:27:00 crc kubenswrapper[4739]: I0218 14:27:00.702154 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 18 14:27:01 crc kubenswrapper[4739]: I0218 14:27:01.632342 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"44288fd5-6ac4-4d9f-b16e-97ae45b79030","Type":"ContainerStarted","Data":"403176e967830085678bd1996fdf275eb9c32a06ad422547dc22e7325fbbc439"} Feb 18 14:27:01 crc kubenswrapper[4739]: I0218 14:27:01.632969 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"44288fd5-6ac4-4d9f-b16e-97ae45b79030","Type":"ContainerStarted","Data":"2cb412dbed0abede87146ec8c9c134ce1a5e53ebf02003796582c9c6e0b8dbe0"} Feb 18 14:27:02 crc kubenswrapper[4739]: I0218 14:27:02.644151 4739 generic.go:334] "Generic (PLEG): container finished" podID="888c24c8-ed9b-4434-b55c-d9f89ba3f0eb" containerID="39ef6715d910bad18771b0adccde4ffddd06d4f64ddaf3ce90256b5a58ff4742" exitCode=0 Feb 18 14:27:02 crc kubenswrapper[4739]: I0218 14:27:02.644238 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" event={"ID":"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb","Type":"ContainerDied","Data":"39ef6715d910bad18771b0adccde4ffddd06d4f64ddaf3ce90256b5a58ff4742"} Feb 18 14:27:03 crc kubenswrapper[4739]: I0218 14:27:03.114810 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Feb 18 14:27:03 crc kubenswrapper[4739]: I0218 14:27:03.657296 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"44288fd5-6ac4-4d9f-b16e-97ae45b79030","Type":"ContainerStarted","Data":"724a4db8cc6bc3613b9d2a784ef25e89105612b429c964224a1f1664c08d1bfd"} Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.670439 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" event={"ID":"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb","Type":"ContainerDied","Data":"b8cb9ed99d22914d1a0e1925f4de01b4e33640477ab310a82f26be58456df960"} Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.670969 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8cb9ed99d22914d1a0e1925f4de01b4e33640477ab310a82f26be58456df960" Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.673857 4739 generic.go:334] "Generic (PLEG): container finished" podID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" containerID="86dcf3153be4cedc4f3f4f557f9adbf8d2dc9ddb02d52663f80236312bb555f6" exitCode=0 Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.673898 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"a5594aaa-fab3-4dad-b79e-17200bc2f1ee","Type":"ContainerDied","Data":"86dcf3153be4cedc4f3f4f557f9adbf8d2dc9ddb02d52663f80236312bb555f6"} Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.764034 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.814801 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-repo-setup-combined-ca-bundle\") pod \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.814873 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqkdj\" (UniqueName: \"kubernetes.io/projected/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-kube-api-access-fqkdj\") pod \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.814956 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-ssh-key-openstack-edpm-ipam\") pod \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.815076 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-inventory\") pod \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\" (UID: \"888c24c8-ed9b-4434-b55c-d9f89ba3f0eb\") " Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.823707 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "888c24c8-ed9b-4434-b55c-d9f89ba3f0eb" (UID: "888c24c8-ed9b-4434-b55c-d9f89ba3f0eb"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.824649 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-kube-api-access-fqkdj" (OuterVolumeSpecName: "kube-api-access-fqkdj") pod "888c24c8-ed9b-4434-b55c-d9f89ba3f0eb" (UID: "888c24c8-ed9b-4434-b55c-d9f89ba3f0eb"). InnerVolumeSpecName "kube-api-access-fqkdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.863015 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "888c24c8-ed9b-4434-b55c-d9f89ba3f0eb" (UID: "888c24c8-ed9b-4434-b55c-d9f89ba3f0eb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.868200 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-inventory" (OuterVolumeSpecName: "inventory") pod "888c24c8-ed9b-4434-b55c-d9f89ba3f0eb" (UID: "888c24c8-ed9b-4434-b55c-d9f89ba3f0eb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.918988 4739 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.919038 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqkdj\" (UniqueName: \"kubernetes.io/projected/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-kube-api-access-fqkdj\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.919052 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:04 crc kubenswrapper[4739]: I0218 14:27:04.919066 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/888c24c8-ed9b-4434-b55c-d9f89ba3f0eb-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.035499 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.687274 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.690514 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.690589 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"a5594aaa-fab3-4dad-b79e-17200bc2f1ee","Type":"ContainerDied","Data":"95dc6b6636dbaa09768645df6028b202c5114fe72bc89c98b8330cd58fee1cc8"} Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.690639 4739 scope.go:117] "RemoveContainer" containerID="86dcf3153be4cedc4f3f4f557f9adbf8d2dc9ddb02d52663f80236312bb555f6" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.856429 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc"] Feb 18 14:27:05 crc kubenswrapper[4739]: E0218 14:27:05.857091 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888c24c8-ed9b-4434-b55c-d9f89ba3f0eb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.857114 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="888c24c8-ed9b-4434-b55c-d9f89ba3f0eb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 14:27:05 crc kubenswrapper[4739]: E0218 14:27:05.857144 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" containerName="setup-container" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.857152 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" containerName="setup-container" Feb 18 14:27:05 crc kubenswrapper[4739]: E0218 14:27:05.857177 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" containerName="rabbitmq" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.857185 4739 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" containerName="rabbitmq" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.857483 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" containerName="rabbitmq" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.857519 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="888c24c8-ed9b-4434-b55c-d9f89ba3f0eb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.858524 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.862245 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.862749 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.862985 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.864201 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.868776 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc"] Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.994985 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952\") pod \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.995103 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h92gx\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-kube-api-access-h92gx\") pod \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.995191 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-plugins-conf\") pod \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.995259 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-confd\") pod \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.995354 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-erlang-cookie-secret\") pod \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.995388 4739 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-plugins\") pod \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.995411 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-server-conf\") pod \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.995472 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-erlang-cookie\") pod \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.995501 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-config-data\") pod \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.995529 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-pod-info\") pod \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.995626 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-tls\") pod \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\" (UID: \"a5594aaa-fab3-4dad-b79e-17200bc2f1ee\") " Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.996009 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8lfnc\" (UID: \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.996142 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r4rx\" (UniqueName: \"kubernetes.io/projected/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-kube-api-access-5r4rx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8lfnc\" (UID: \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.996188 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8lfnc\" (UID: \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:05 crc kubenswrapper[4739]: I0218 14:27:05.997959 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a5594aaa-fab3-4dad-b79e-17200bc2f1ee" (UID: "a5594aaa-fab3-4dad-b79e-17200bc2f1ee"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:05.999595 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a5594aaa-fab3-4dad-b79e-17200bc2f1ee" (UID: "a5594aaa-fab3-4dad-b79e-17200bc2f1ee"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:05.999962 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a5594aaa-fab3-4dad-b79e-17200bc2f1ee" (UID: "a5594aaa-fab3-4dad-b79e-17200bc2f1ee"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.003109 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-pod-info" (OuterVolumeSpecName: "pod-info") pod "a5594aaa-fab3-4dad-b79e-17200bc2f1ee" (UID: "a5594aaa-fab3-4dad-b79e-17200bc2f1ee"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.004616 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-kube-api-access-h92gx" (OuterVolumeSpecName: "kube-api-access-h92gx") pod "a5594aaa-fab3-4dad-b79e-17200bc2f1ee" (UID: "a5594aaa-fab3-4dad-b79e-17200bc2f1ee"). InnerVolumeSpecName "kube-api-access-h92gx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.008266 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a5594aaa-fab3-4dad-b79e-17200bc2f1ee" (UID: "a5594aaa-fab3-4dad-b79e-17200bc2f1ee"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.012699 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a5594aaa-fab3-4dad-b79e-17200bc2f1ee" (UID: "a5594aaa-fab3-4dad-b79e-17200bc2f1ee"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.015460 4739 scope.go:117] "RemoveContainer" containerID="a1e18a076520af601e6507f431aa025a06385212521ec627530586a088f11655" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.059340 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-config-data" (OuterVolumeSpecName: "config-data") pod "a5594aaa-fab3-4dad-b79e-17200bc2f1ee" (UID: "a5594aaa-fab3-4dad-b79e-17200bc2f1ee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.099367 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8lfnc\" (UID: \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.099436 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r4rx\" (UniqueName: \"kubernetes.io/projected/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-kube-api-access-5r4rx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8lfnc\" (UID: \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.099484 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8lfnc\" (UID: \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.099612 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.099623 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.099633 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.099641 4739 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.099648 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.099656 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h92gx\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-kube-api-access-h92gx\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.099664 4739 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.099672 4739 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:06 crc 
kubenswrapper[4739]: I0218 14:27:06.099879 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-server-conf" (OuterVolumeSpecName: "server-conf") pod "a5594aaa-fab3-4dad-b79e-17200bc2f1ee" (UID: "a5594aaa-fab3-4dad-b79e-17200bc2f1ee"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.103737 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8lfnc\" (UID: \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.106485 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8lfnc\" (UID: \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.121229 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r4rx\" (UniqueName: \"kubernetes.io/projected/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-kube-api-access-5r4rx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8lfnc\" (UID: \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.123493 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952" (OuterVolumeSpecName: "persistence") pod "a5594aaa-fab3-4dad-b79e-17200bc2f1ee" (UID: "a5594aaa-fab3-4dad-b79e-17200bc2f1ee"). InnerVolumeSpecName "pvc-23b37086-b6fd-42dd-960e-d907e6689952". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.183368 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.205233 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-23b37086-b6fd-42dd-960e-d907e6689952\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952\") on node \"crc\" " Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.205269 4739 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.260600 4739 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.260765 4739 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-23b37086-b6fd-42dd-960e-d907e6689952" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952") on node "crc" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.290349 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a5594aaa-fab3-4dad-b79e-17200bc2f1ee" (UID: "a5594aaa-fab3-4dad-b79e-17200bc2f1ee"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.307066 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a5594aaa-fab3-4dad-b79e-17200bc2f1ee-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.307104 4739 reconciler_common.go:293] "Volume detached for volume \"pvc-23b37086-b6fd-42dd-960e-d907e6689952\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.380499 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.398888 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.445057 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5594aaa-fab3-4dad-b79e-17200bc2f1ee" path="/var/lib/kubelet/pods/a5594aaa-fab3-4dad-b79e-17200bc2f1ee/volumes" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.445984 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.450404 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.450511 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.524413 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/de0100ca-60e4-40d3-afeb-f5da9513fdc1-server-conf\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.524509 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/de0100ca-60e4-40d3-afeb-f5da9513fdc1-pod-info\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.524581 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/de0100ca-60e4-40d3-afeb-f5da9513fdc1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.524604 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rdl2\" (UniqueName: \"kubernetes.io/projected/de0100ca-60e4-40d3-afeb-f5da9513fdc1-kube-api-access-6rdl2\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.524649 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/de0100ca-60e4-40d3-afeb-f5da9513fdc1-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.524675 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/de0100ca-60e4-40d3-afeb-f5da9513fdc1-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.524703 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/de0100ca-60e4-40d3-afeb-f5da9513fdc1-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.524732 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/de0100ca-60e4-40d3-afeb-f5da9513fdc1-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.524830 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/de0100ca-60e4-40d3-afeb-f5da9513fdc1-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" 
Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.524888 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-23b37086-b6fd-42dd-960e-d907e6689952\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.524904 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de0100ca-60e4-40d3-afeb-f5da9513fdc1-config-data\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.627394 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/de0100ca-60e4-40d3-afeb-f5da9513fdc1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.627447 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rdl2\" (UniqueName: \"kubernetes.io/projected/de0100ca-60e4-40d3-afeb-f5da9513fdc1-kube-api-access-6rdl2\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.627500 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/de0100ca-60e4-40d3-afeb-f5da9513fdc1-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.627529 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/de0100ca-60e4-40d3-afeb-f5da9513fdc1-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.627561 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/de0100ca-60e4-40d3-afeb-f5da9513fdc1-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.627590 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/de0100ca-60e4-40d3-afeb-f5da9513fdc1-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.627678 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/de0100ca-60e4-40d3-afeb-f5da9513fdc1-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.627754 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-23b37086-b6fd-42dd-960e-d907e6689952\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.627772 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de0100ca-60e4-40d3-afeb-f5da9513fdc1-config-data\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.627864 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/de0100ca-60e4-40d3-afeb-f5da9513fdc1-server-conf\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.627891 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/de0100ca-60e4-40d3-afeb-f5da9513fdc1-pod-info\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.629682 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/de0100ca-60e4-40d3-afeb-f5da9513fdc1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.630158 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/de0100ca-60e4-40d3-afeb-f5da9513fdc1-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.630333 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/de0100ca-60e4-40d3-afeb-f5da9513fdc1-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.630520 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.630553 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-23b37086-b6fd-42dd-960e-d907e6689952\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1542ad1e95f6d05e9b33a4f8791d4ee2fe2b5bce9c9209ea9b163f0535bf4310/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.632583 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/de0100ca-60e4-40d3-afeb-f5da9513fdc1-server-conf\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.632691 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de0100ca-60e4-40d3-afeb-f5da9513fdc1-config-data\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.633948 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/de0100ca-60e4-40d3-afeb-f5da9513fdc1-pod-info\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.634110 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/de0100ca-60e4-40d3-afeb-f5da9513fdc1-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.636399 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/de0100ca-60e4-40d3-afeb-f5da9513fdc1-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.643921 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/de0100ca-60e4-40d3-afeb-f5da9513fdc1-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.650215 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rdl2\" (UniqueName: \"kubernetes.io/projected/de0100ca-60e4-40d3-afeb-f5da9513fdc1-kube-api-access-6rdl2\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.742101 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"44288fd5-6ac4-4d9f-b16e-97ae45b79030","Type":"ContainerStarted","Data":"50a4a18ff9bb9857e30195f19a2bdc7011b61567cda085dd45f53667910cdcdf"} Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.744071 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-23b37086-b6fd-42dd-960e-d907e6689952\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23b37086-b6fd-42dd-960e-d907e6689952\") pod \"rabbitmq-server-1\" (UID: \"de0100ca-60e4-40d3-afeb-f5da9513fdc1\") " pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.781087 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 18 14:27:06 crc kubenswrapper[4739]: I0218 14:27:06.846486 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc"] Feb 18 14:27:06 crc kubenswrapper[4739]: W0218 14:27:06.853708 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba2cd97a_cec6_45bc_a08c_b179dc0f72d6.slice/crio-8c6729cb7149b5a622949267e553fc0e1167817c937b501ab0e41e3f842cfcd7 WatchSource:0}: Error finding container 8c6729cb7149b5a622949267e553fc0e1167817c937b501ab0e41e3f842cfcd7: Status 404 returned error can't find the container with id 8c6729cb7149b5a622949267e553fc0e1167817c937b501ab0e41e3f842cfcd7 Feb 18 14:27:07 crc kubenswrapper[4739]: I0218 14:27:07.416864 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 18 14:27:07 crc kubenswrapper[4739]: W0218 14:27:07.701979 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde0100ca_60e4_40d3_afeb_f5da9513fdc1.slice/crio-d01e8b6a388c8c4f0c5ba2b09aafc27911de4aef1e97385fa166a19662e34911 WatchSource:0}: Error finding container d01e8b6a388c8c4f0c5ba2b09aafc27911de4aef1e97385fa166a19662e34911: Status 404 returned error can't find the container with id d01e8b6a388c8c4f0c5ba2b09aafc27911de4aef1e97385fa166a19662e34911 Feb 18 14:27:07 crc kubenswrapper[4739]: I0218 14:27:07.762928 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" event={"ID":"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6","Type":"ContainerStarted","Data":"8c6729cb7149b5a622949267e553fc0e1167817c937b501ab0e41e3f842cfcd7"} Feb 18 14:27:07 crc kubenswrapper[4739]: I0218 14:27:07.764435 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"de0100ca-60e4-40d3-afeb-f5da9513fdc1","Type":"ContainerStarted","Data":"d01e8b6a388c8c4f0c5ba2b09aafc27911de4aef1e97385fa166a19662e34911"} Feb 18 14:27:08 crc kubenswrapper[4739]: I0218 14:27:08.776729 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"44288fd5-6ac4-4d9f-b16e-97ae45b79030","Type":"ContainerStarted","Data":"e9e866df911b5c966b3e84d602e9af4840ac454bf3dae29b5821cd170b689e34"} Feb 18 14:27:08 crc kubenswrapper[4739]: I0218 14:27:08.778405 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" event={"ID":"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6","Type":"ContainerStarted","Data":"165b501719bb5519d62c583995defec1cc41f398b6ba2378ea6ec76af3514685"} Feb 18 14:27:08 crc kubenswrapper[4739]: I0218 14:27:08.827892 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.748463634 podStartE2EDuration="9.827869245s" podCreationTimestamp="2026-02-18 14:26:59 +0000 UTC" firstStartedPulling="2026-02-18 14:27:00.709350717 +0000 UTC m=+1653.205071639" lastFinishedPulling="2026-02-18 14:27:07.788756328 +0000 UTC 
m=+1660.284477250" observedRunningTime="2026-02-18 14:27:08.798574562 +0000 UTC m=+1661.294295494" watchObservedRunningTime="2026-02-18 14:27:08.827869245 +0000 UTC m=+1661.323590167" Feb 18 14:27:08 crc kubenswrapper[4739]: I0218 14:27:08.842999 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" podStartSLOduration=2.921266056 podStartE2EDuration="3.842982084s" podCreationTimestamp="2026-02-18 14:27:05 +0000 UTC" firstStartedPulling="2026-02-18 14:27:06.868308721 +0000 UTC m=+1659.364029643" lastFinishedPulling="2026-02-18 14:27:07.790024749 +0000 UTC m=+1660.285745671" observedRunningTime="2026-02-18 14:27:08.824557662 +0000 UTC m=+1661.320278584" watchObservedRunningTime="2026-02-18 14:27:08.842982084 +0000 UTC m=+1661.338703006" Feb 18 14:27:09 crc kubenswrapper[4739]: I0218 14:27:09.790880 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"de0100ca-60e4-40d3-afeb-f5da9513fdc1","Type":"ContainerStarted","Data":"6dd00087a808c5662ace512584ad8a0d61f186a6d9327c0016591eca1cbb805c"} Feb 18 14:27:11 crc kubenswrapper[4739]: I0218 14:27:11.813054 4739 generic.go:334] "Generic (PLEG): container finished" podID="ba2cd97a-cec6-45bc-a08c-b179dc0f72d6" containerID="165b501719bb5519d62c583995defec1cc41f398b6ba2378ea6ec76af3514685" exitCode=0 Feb 18 14:27:11 crc kubenswrapper[4739]: I0218 14:27:11.813153 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" event={"ID":"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6","Type":"ContainerDied","Data":"165b501719bb5519d62c583995defec1cc41f398b6ba2378ea6ec76af3514685"} Feb 18 14:27:12 crc kubenswrapper[4739]: I0218 14:27:12.417485 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:27:12 crc kubenswrapper[4739]: E0218 14:27:12.418018 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.327115 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.411630 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-ssh-key-openstack-edpm-ipam\") pod \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\" (UID: \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\") " Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.411847 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-inventory\") pod \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\" (UID: \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\") " Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.411932 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r4rx\" (UniqueName: \"kubernetes.io/projected/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-kube-api-access-5r4rx\") pod \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\" (UID: \"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6\") " Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.417304 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-kube-api-access-5r4rx" (OuterVolumeSpecName: "kube-api-access-5r4rx") pod "ba2cd97a-cec6-45bc-a08c-b179dc0f72d6" (UID: "ba2cd97a-cec6-45bc-a08c-b179dc0f72d6"). InnerVolumeSpecName "kube-api-access-5r4rx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.447167 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ba2cd97a-cec6-45bc-a08c-b179dc0f72d6" (UID: "ba2cd97a-cec6-45bc-a08c-b179dc0f72d6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.447574 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-inventory" (OuterVolumeSpecName: "inventory") pod "ba2cd97a-cec6-45bc-a08c-b179dc0f72d6" (UID: "ba2cd97a-cec6-45bc-a08c-b179dc0f72d6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.518348 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r4rx\" (UniqueName: \"kubernetes.io/projected/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-kube-api-access-5r4rx\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.518409 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.518421 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba2cd97a-cec6-45bc-a08c-b179dc0f72d6-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.838706 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" event={"ID":"ba2cd97a-cec6-45bc-a08c-b179dc0f72d6","Type":"ContainerDied","Data":"8c6729cb7149b5a622949267e553fc0e1167817c937b501ab0e41e3f842cfcd7"} Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.839028 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c6729cb7149b5a622949267e553fc0e1167817c937b501ab0e41e3f842cfcd7" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.838864 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8lfnc" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.935573 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f"] Feb 18 14:27:13 crc kubenswrapper[4739]: E0218 14:27:13.936135 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba2cd97a-cec6-45bc-a08c-b179dc0f72d6" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.936152 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba2cd97a-cec6-45bc-a08c-b179dc0f72d6" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.936488 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba2cd97a-cec6-45bc-a08c-b179dc0f72d6" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.937303 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.939500 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.939671 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.939572 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.939942 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:27:13 crc kubenswrapper[4739]: I0218 14:27:13.963782 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f"] Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.031765 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.032080 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htwsl\" (UniqueName: \"kubernetes.io/projected/64a6af44-5f38-4ac7-a370-74b190762136-kube-api-access-htwsl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.032229 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.032381 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.134798 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htwsl\" (UniqueName: \"kubernetes.io/projected/64a6af44-5f38-4ac7-a370-74b190762136-kube-api-access-htwsl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.134876 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.134980 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.135141 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.139756 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.140233 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.152466 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.162104 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htwsl\" (UniqueName: \"kubernetes.io/projected/64a6af44-5f38-4ac7-a370-74b190762136-kube-api-access-htwsl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.265061 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:27:14 crc kubenswrapper[4739]: W0218 14:27:14.841323 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64a6af44_5f38_4ac7_a370_74b190762136.slice/crio-1dda38bf5e5a89e3a1c2a63b4204490abe8ce3663a76f18cac169be7d4899eb3 WatchSource:0}: Error finding container 1dda38bf5e5a89e3a1c2a63b4204490abe8ce3663a76f18cac169be7d4899eb3: Status 404 returned error can't find the container with id 1dda38bf5e5a89e3a1c2a63b4204490abe8ce3663a76f18cac169be7d4899eb3 Feb 18 14:27:14 crc kubenswrapper[4739]: I0218 14:27:14.842996 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f"] Feb 18 14:27:15 crc kubenswrapper[4739]: I0218 14:27:15.867321 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" event={"ID":"64a6af44-5f38-4ac7-a370-74b190762136","Type":"ContainerStarted","Data":"693161be45d8d36fda8c2d4dc95d7bad1c0a7d87875be1b93f225b971a6de51d"} Feb 18 14:27:15 crc kubenswrapper[4739]: I0218 14:27:15.867940 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" event={"ID":"64a6af44-5f38-4ac7-a370-74b190762136","Type":"ContainerStarted","Data":"1dda38bf5e5a89e3a1c2a63b4204490abe8ce3663a76f18cac169be7d4899eb3"} Feb 18 14:27:15 crc kubenswrapper[4739]: I0218 14:27:15.897411 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" podStartSLOduration=2.495766389 podStartE2EDuration="2.897394149s" podCreationTimestamp="2026-02-18 14:27:13 +0000 UTC" firstStartedPulling="2026-02-18 14:27:14.843662385 +0000 UTC m=+1667.339383307" lastFinishedPulling="2026-02-18 14:27:15.245290145 +0000 UTC m=+1667.741011067" observedRunningTime="2026-02-18 14:27:15.884793473 +0000 UTC m=+1668.380514415" watchObservedRunningTime="2026-02-18 14:27:15.897394149 +0000 UTC m=+1668.393115071" Feb 18 14:27:27 crc kubenswrapper[4739]: I0218 14:27:27.410429 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:27:27 crc kubenswrapper[4739]: E0218 14:27:27.411318 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:27:33 crc kubenswrapper[4739]: I0218 14:27:33.236024 4739 scope.go:117] "RemoveContainer" containerID="0a9c96ef9bc05a189057147729fcd0a7c0a62f199e816b285da0bdde192dbc40" Feb 18 14:27:33 crc kubenswrapper[4739]: I0218 14:27:33.296390 4739 scope.go:117] "RemoveContainer" containerID="cb1eddfed9e44b497a97463dd1b3569fad968271c4c4d74bfb3de94948277b04" Feb 18 14:27:41 crc kubenswrapper[4739]: I0218 14:27:41.412151 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:27:41 crc kubenswrapper[4739]: E0218 14:27:41.412889 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:27:42 crc kubenswrapper[4739]: I0218 14:27:42.157286 4739 generic.go:334] "Generic (PLEG): container finished" podID="de0100ca-60e4-40d3-afeb-f5da9513fdc1" containerID="6dd00087a808c5662ace512584ad8a0d61f186a6d9327c0016591eca1cbb805c" exitCode=0 Feb 18 14:27:42 crc kubenswrapper[4739]: I0218 14:27:42.157370 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"de0100ca-60e4-40d3-afeb-f5da9513fdc1","Type":"ContainerDied","Data":"6dd00087a808c5662ace512584ad8a0d61f186a6d9327c0016591eca1cbb805c"} Feb 18 14:27:43 crc kubenswrapper[4739]: I0218 14:27:43.169676 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"de0100ca-60e4-40d3-afeb-f5da9513fdc1","Type":"ContainerStarted","Data":"8b34ba5d73f3b358eb72273b94ce8f47208dc2fb18816f449b33e731312474a3"} Feb 18 14:27:43 crc kubenswrapper[4739]: I0218 14:27:43.170202 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 18 14:27:43 crc kubenswrapper[4739]: I0218 14:27:43.204516 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.204498239 podStartE2EDuration="37.204498239s" podCreationTimestamp="2026-02-18 14:27:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:27:43.198658763 +0000 UTC m=+1695.694379695" watchObservedRunningTime="2026-02-18 14:27:43.204498239 +0000 UTC m=+1695.700219161" Feb 18 14:27:54 crc kubenswrapper[4739]: I0218 14:27:54.411260 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:27:54 crc kubenswrapper[4739]: E0218 14:27:54.412737 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:27:56 crc kubenswrapper[4739]: I0218 14:27:56.785657 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 18 14:27:56 crc kubenswrapper[4739]: I0218 14:27:56.861070 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 14:28:01 crc kubenswrapper[4739]: I0218 14:28:01.642289 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="70500a97-2717-4761-884a-25cf8ab89380" containerName="rabbitmq" containerID="cri-o://9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef" gracePeriod=604796 Feb 18 14:28:03 crc kubenswrapper[4739]: I0218 14:28:03.091019 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="70500a97-2717-4761-884a-25cf8ab89380" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Feb 
18 14:28:06 crc kubenswrapper[4739]: I0218 14:28:06.411542 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:28:06 crc kubenswrapper[4739]: E0218 14:28:06.412162 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.407702 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.498409 4739 generic.go:334] "Generic (PLEG): container finished" podID="70500a97-2717-4761-884a-25cf8ab89380" containerID="9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef" exitCode=0 Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.498470 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"70500a97-2717-4761-884a-25cf8ab89380","Type":"ContainerDied","Data":"9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef"} Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.498498 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"70500a97-2717-4761-884a-25cf8ab89380","Type":"ContainerDied","Data":"6a1064f065e3c36cfd11b4abc66439e09b22ce13fc43d0cfe21f9e1ccc93bcec"} Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.498514 4739 scope.go:117] "RemoveContainer" containerID="9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.498678 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.529528 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/70500a97-2717-4761-884a-25cf8ab89380-erlang-cookie-secret\") pod \"70500a97-2717-4761-884a-25cf8ab89380\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.529581 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-server-conf\") pod \"70500a97-2717-4761-884a-25cf8ab89380\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.529611 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqscd\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-kube-api-access-xqscd\") pod \"70500a97-2717-4761-884a-25cf8ab89380\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.530267 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\") pod \"70500a97-2717-4761-884a-25cf8ab89380\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.530305 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/70500a97-2717-4761-884a-25cf8ab89380-pod-info\") pod \"70500a97-2717-4761-884a-25cf8ab89380\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.530424 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-plugins-conf\") pod \"70500a97-2717-4761-884a-25cf8ab89380\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.530573 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-erlang-cookie\") pod \"70500a97-2717-4761-884a-25cf8ab89380\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.530629 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-tls\") pod \"70500a97-2717-4761-884a-25cf8ab89380\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.530680 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-confd\") pod \"70500a97-2717-4761-884a-25cf8ab89380\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.530696 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-plugins\") pod 
\"70500a97-2717-4761-884a-25cf8ab89380\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.530753 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-config-data\") pod \"70500a97-2717-4761-884a-25cf8ab89380\" (UID: \"70500a97-2717-4761-884a-25cf8ab89380\") " Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.531269 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "70500a97-2717-4761-884a-25cf8ab89380" (UID: "70500a97-2717-4761-884a-25cf8ab89380"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.531685 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.532810 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "70500a97-2717-4761-884a-25cf8ab89380" (UID: "70500a97-2717-4761-884a-25cf8ab89380"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.534545 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "70500a97-2717-4761-884a-25cf8ab89380" (UID: "70500a97-2717-4761-884a-25cf8ab89380"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.538733 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-kube-api-access-xqscd" (OuterVolumeSpecName: "kube-api-access-xqscd") pod "70500a97-2717-4761-884a-25cf8ab89380" (UID: "70500a97-2717-4761-884a-25cf8ab89380"). InnerVolumeSpecName "kube-api-access-xqscd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.554011 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/70500a97-2717-4761-884a-25cf8ab89380-pod-info" (OuterVolumeSpecName: "pod-info") pod "70500a97-2717-4761-884a-25cf8ab89380" (UID: "70500a97-2717-4761-884a-25cf8ab89380"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.554087 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "70500a97-2717-4761-884a-25cf8ab89380" (UID: "70500a97-2717-4761-884a-25cf8ab89380"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.554606 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70500a97-2717-4761-884a-25cf8ab89380-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "70500a97-2717-4761-884a-25cf8ab89380" (UID: "70500a97-2717-4761-884a-25cf8ab89380"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.571546 4739 scope.go:117] "RemoveContainer" containerID="50c02016a55a2c9e373d088514e04b072451dfe1867c0fb7a51a817add5d6886" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.594053 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-config-data" (OuterVolumeSpecName: "config-data") pod "70500a97-2717-4761-884a-25cf8ab89380" (UID: "70500a97-2717-4761-884a-25cf8ab89380"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.594865 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd" (OuterVolumeSpecName: "persistence") pod "70500a97-2717-4761-884a-25cf8ab89380" (UID: "70500a97-2717-4761-884a-25cf8ab89380"). InnerVolumeSpecName "pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.642037 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.642069 4739 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/70500a97-2717-4761-884a-25cf8ab89380-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.642079 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqscd\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-kube-api-access-xqscd\") on node \"crc\" DevicePath \"\"" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.642108 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\") on node \"crc\" " Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.642118 4739 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/70500a97-2717-4761-884a-25cf8ab89380-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.642130 4739 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.642142 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 14:28:08 crc 
kubenswrapper[4739]: I0218 14:28:08.642151 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.647185 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-server-conf" (OuterVolumeSpecName: "server-conf") pod "70500a97-2717-4761-884a-25cf8ab89380" (UID: "70500a97-2717-4761-884a-25cf8ab89380"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.695747 4739 scope.go:117] "RemoveContainer" containerID="9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef" Feb 18 14:28:08 crc kubenswrapper[4739]: E0218 14:28:08.696254 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef\": container with ID starting with 9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef not found: ID does not exist" containerID="9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.696299 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef"} err="failed to get container status \"9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef\": rpc error: code = NotFound desc = could not find container \"9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef\": container with ID starting with 9e4a7fe4f7813b79f3b17bc08e94b5920a4dddae3d81961c9d28439f54dd64ef not found: ID does not exist" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.696329 4739 scope.go:117] "RemoveContainer" containerID="50c02016a55a2c9e373d088514e04b072451dfe1867c0fb7a51a817add5d6886" Feb 18 14:28:08 crc kubenswrapper[4739]: E0218 14:28:08.699618 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50c02016a55a2c9e373d088514e04b072451dfe1867c0fb7a51a817add5d6886\": container with ID starting with 50c02016a55a2c9e373d088514e04b072451dfe1867c0fb7a51a817add5d6886 not found: ID does not exist" containerID="50c02016a55a2c9e373d088514e04b072451dfe1867c0fb7a51a817add5d6886" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.699701 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50c02016a55a2c9e373d088514e04b072451dfe1867c0fb7a51a817add5d6886"} err="failed to get container status \"50c02016a55a2c9e373d088514e04b072451dfe1867c0fb7a51a817add5d6886\": rpc error: code = NotFound desc = could not find container \"50c02016a55a2c9e373d088514e04b072451dfe1867c0fb7a51a817add5d6886\": container with ID starting with 50c02016a55a2c9e373d088514e04b072451dfe1867c0fb7a51a817add5d6886 not found: ID does not exist" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.712134 4739 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.712296 4739 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd") on node "crc" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.746808 4739 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/70500a97-2717-4761-884a-25cf8ab89380-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.746870 4739 reconciler_common.go:293] "Volume detached for volume \"pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\") on node \"crc\" DevicePath \"\"" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.754147 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "70500a97-2717-4761-884a-25cf8ab89380" (UID: "70500a97-2717-4761-884a-25cf8ab89380"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.842214 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.849792 4739 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/70500a97-2717-4761-884a-25cf8ab89380-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.855383 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.873654 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 14:28:08 crc kubenswrapper[4739]: E0218 14:28:08.874280 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70500a97-2717-4761-884a-25cf8ab89380" containerName="setup-container" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.874300 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="70500a97-2717-4761-884a-25cf8ab89380" containerName="setup-container" Feb 18 14:28:08 crc kubenswrapper[4739]: E0218 14:28:08.874331 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70500a97-2717-4761-884a-25cf8ab89380" containerName="rabbitmq" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.874340 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="70500a97-2717-4761-884a-25cf8ab89380" containerName="rabbitmq" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.874657 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="70500a97-2717-4761-884a-25cf8ab89380" containerName="rabbitmq" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.876280 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.921848 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.952312 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bd925294-7441-4ba8-af23-290ef19deb9b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.952378 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd925294-7441-4ba8-af23-290ef19deb9b-config-data\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.952468 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bd925294-7441-4ba8-af23-290ef19deb9b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.952488 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bd925294-7441-4ba8-af23-290ef19deb9b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.952507 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ch44\" (UniqueName: \"kubernetes.io/projected/bd925294-7441-4ba8-af23-290ef19deb9b-kube-api-access-9ch44\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.952587 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bd925294-7441-4ba8-af23-290ef19deb9b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.952650 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bd925294-7441-4ba8-af23-290ef19deb9b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.952699 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bd925294-7441-4ba8-af23-290ef19deb9b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.952728 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.952773 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bd925294-7441-4ba8-af23-290ef19deb9b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:08 crc kubenswrapper[4739]: I0218 14:28:08.952798 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bd925294-7441-4ba8-af23-290ef19deb9b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.057986 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bd925294-7441-4ba8-af23-290ef19deb9b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.058085 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bd925294-7441-4ba8-af23-290ef19deb9b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.058131 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bd925294-7441-4ba8-af23-290ef19deb9b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.058163 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.058206 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bd925294-7441-4ba8-af23-290ef19deb9b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.058240 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bd925294-7441-4ba8-af23-290ef19deb9b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.058317 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bd925294-7441-4ba8-af23-290ef19deb9b-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.058349 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd925294-7441-4ba8-af23-290ef19deb9b-config-data\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.058414 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bd925294-7441-4ba8-af23-290ef19deb9b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.058438 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bd925294-7441-4ba8-af23-290ef19deb9b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.058479 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ch44\" (UniqueName: \"kubernetes.io/projected/bd925294-7441-4ba8-af23-290ef19deb9b-kube-api-access-9ch44\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.066422 4739 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.066480 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0e4a135f402bfdd87a0dd9dc00d6afd10d61dd6559041546aff07ddf4aa84ac2/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.066891 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bd925294-7441-4ba8-af23-290ef19deb9b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.069212 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bd925294-7441-4ba8-af23-290ef19deb9b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.076957 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bd925294-7441-4ba8-af23-290ef19deb9b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.079368 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/bd925294-7441-4ba8-af23-290ef19deb9b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.079973 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bd925294-7441-4ba8-af23-290ef19deb9b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.093839 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bd925294-7441-4ba8-af23-290ef19deb9b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.094320 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd925294-7441-4ba8-af23-290ef19deb9b-config-data\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.097754 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bd925294-7441-4ba8-af23-290ef19deb9b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.105395 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bd925294-7441-4ba8-af23-290ef19deb9b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.135385 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ch44\" (UniqueName: \"kubernetes.io/projected/bd925294-7441-4ba8-af23-290ef19deb9b-kube-api-access-9ch44\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.403627 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9527d74b-526e-46aa-af76-86cd0a1b17cd\") pod \"rabbitmq-server-0\" (UID: \"bd925294-7441-4ba8-af23-290ef19deb9b\") " pod="openstack/rabbitmq-server-0" Feb 18 14:28:09 crc kubenswrapper[4739]: I0218 14:28:09.589584 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 14:28:10 crc kubenswrapper[4739]: I0218 14:28:10.115595 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 14:28:10 crc kubenswrapper[4739]: I0218 14:28:10.422109 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70500a97-2717-4761-884a-25cf8ab89380" path="/var/lib/kubelet/pods/70500a97-2717-4761-884a-25cf8ab89380/volumes" Feb 18 14:28:10 crc kubenswrapper[4739]: I0218 14:28:10.530223 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bd925294-7441-4ba8-af23-290ef19deb9b","Type":"ContainerStarted","Data":"a90863a928903c3aac9369cd5894ed94762e95ec15acbb20ff0c0a3eebfb3eb1"} Feb 18 14:28:12 crc kubenswrapper[4739]: I0218 14:28:12.555031 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bd925294-7441-4ba8-af23-290ef19deb9b","Type":"ContainerStarted","Data":"1a13255ed1ef9e684006b83a2f8cf160ca9eedb6ed2033c5fcf1a517209655e1"} Feb 18 14:28:18 crc kubenswrapper[4739]: I0218 14:28:18.426626 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:28:18 crc kubenswrapper[4739]: E0218 14:28:18.427732 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:28:33 crc kubenswrapper[4739]: I0218 14:28:33.410304 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:28:33 crc kubenswrapper[4739]: E0218 14:28:33.411198 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:28:33 crc kubenswrapper[4739]: I0218 14:28:33.496476 4739 scope.go:117] "RemoveContainer" containerID="f3277f9c953c856503e9f54f23df005c12ffcd64974ef18efe5d6f5daaca7db8" Feb 18 14:28:33 crc kubenswrapper[4739]: I0218 14:28:33.523111 4739 scope.go:117] "RemoveContainer" containerID="51c86b3e76646ccace7cb768aa196771df840d5aa0602f13a9e3d3f8fd198f42" Feb 18 14:28:43 crc kubenswrapper[4739]: I0218 14:28:43.913178 4739 generic.go:334] "Generic (PLEG): container finished" podID="bd925294-7441-4ba8-af23-290ef19deb9b" containerID="1a13255ed1ef9e684006b83a2f8cf160ca9eedb6ed2033c5fcf1a517209655e1" exitCode=0 Feb 18 14:28:43 crc kubenswrapper[4739]: I0218 14:28:43.913437 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bd925294-7441-4ba8-af23-290ef19deb9b","Type":"ContainerDied","Data":"1a13255ed1ef9e684006b83a2f8cf160ca9eedb6ed2033c5fcf1a517209655e1"} Feb 18 14:28:44 crc kubenswrapper[4739]: I0218 14:28:44.927118 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"bd925294-7441-4ba8-af23-290ef19deb9b","Type":"ContainerStarted","Data":"e3d729500906cf43dc6d40a9f3c8718a85d4049bcf52d0fc7ee100523b3b2d83"} Feb 18 14:28:44 crc kubenswrapper[4739]: I0218 14:28:44.927920 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 18 14:28:44 crc kubenswrapper[4739]: I0218 14:28:44.966496 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.966446251 podStartE2EDuration="36.966446251s" podCreationTimestamp="2026-02-18 14:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:28:44.947307167 +0000 UTC m=+1757.443028109" watchObservedRunningTime="2026-02-18 14:28:44.966446251 +0000 UTC m=+1757.462167173" Feb 18 14:28:47 crc kubenswrapper[4739]: I0218 14:28:47.410805 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:28:47 crc kubenswrapper[4739]: E0218 14:28:47.411723 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:28:59 crc kubenswrapper[4739]: I0218 14:28:59.593624 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 18 14:29:01 crc kubenswrapper[4739]: I0218 14:29:01.410926 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:29:01 crc kubenswrapper[4739]: E0218 14:29:01.411540 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:29:14 crc kubenswrapper[4739]: I0218 14:29:14.410434 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:29:14 crc kubenswrapper[4739]: E0218 14:29:14.411137 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:29:29 crc kubenswrapper[4739]: I0218 14:29:29.410941 4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:29:30 crc kubenswrapper[4739]: I0218 14:29:30.445844 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" 
event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"eac2682f7b1c0ab63659ddee01f98f4f7cbae0ee5ed689e12d939bd80a710334"} Feb 18 14:29:32 crc kubenswrapper[4739]: I0218 14:29:32.051907 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-nndld"] Feb 18 14:29:32 crc kubenswrapper[4739]: I0218 14:29:32.066402 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-d1e3-account-create-update-27rvz"] Feb 18 14:29:32 crc kubenswrapper[4739]: I0218 14:29:32.081689 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-fwtxs"] Feb 18 14:29:32 crc kubenswrapper[4739]: I0218 14:29:32.093206 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-d1e3-account-create-update-27rvz"] Feb 18 14:29:32 crc kubenswrapper[4739]: I0218 14:29:32.104328 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-nndld"] Feb 18 14:29:32 crc kubenswrapper[4739]: I0218 14:29:32.115159 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-fwtxs"] Feb 18 14:29:32 crc kubenswrapper[4739]: I0218 14:29:32.426279 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="075a587a-4bf2-43e9-8c63-1357e9cb05c9" path="/var/lib/kubelet/pods/075a587a-4bf2-43e9-8c63-1357e9cb05c9/volumes" Feb 18 14:29:32 crc kubenswrapper[4739]: I0218 14:29:32.428181 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b08bf9ca-ebbc-4d72-b227-20a5c7eed529" path="/var/lib/kubelet/pods/b08bf9ca-ebbc-4d72-b227-20a5c7eed529/volumes" Feb 18 14:29:32 crc kubenswrapper[4739]: I0218 14:29:32.429494 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66" path="/var/lib/kubelet/pods/c3ec6cdb-5d2b-447d-a7e6-68b33fd2ba66/volumes" Feb 18 14:29:33 crc kubenswrapper[4739]: I0218 14:29:33.061515 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-m9bmk"] Feb 18 14:29:33 crc kubenswrapper[4739]: I0218 14:29:33.078654 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-4dc5-account-create-update-shnqq"] Feb 18 14:29:33 crc kubenswrapper[4739]: I0218 14:29:33.090892 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-m9bmk"] Feb 18 14:29:33 crc kubenswrapper[4739]: I0218 14:29:33.102702 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-4dc5-account-create-update-shnqq"] Feb 18 14:29:33 crc kubenswrapper[4739]: I0218 14:29:33.610736 4739 scope.go:117] "RemoveContainer" containerID="68e9714ba536a43d37501d6b7f010d3c6c39bb5acb025c1ebc16c210fbdc0c5c" Feb 18 14:29:33 crc kubenswrapper[4739]: I0218 14:29:33.635868 4739 scope.go:117] "RemoveContainer" containerID="4436b566cc1f05e9fd1f4a6b477aee31ea85c52d7a160c7100ca69ed4da051cd" Feb 18 14:29:33 crc kubenswrapper[4739]: I0218 14:29:33.664763 4739 scope.go:117] "RemoveContainer" containerID="a772895e8b9301fae88d05626c6575b52b2a6a8650d7cff35a137c777919497f" Feb 18 14:29:33 crc kubenswrapper[4739]: I0218 14:29:33.696466 4739 scope.go:117] "RemoveContainer" containerID="0bb35ababf8f49716c465fd1a071a3fc61371f1c41007f69d57d1ece07a81b5b" Feb 18 14:29:33 crc kubenswrapper[4739]: I0218 14:29:33.779218 4739 scope.go:117] "RemoveContainer" containerID="0ff92f634c028d5fd31e4fe14bc0e896efd80534f8071fbf418f38d2b982dd3d" Feb 18 14:29:33 crc kubenswrapper[4739]: I0218 
14:29:33.809399 4739 scope.go:117] "RemoveContainer" containerID="cbc19c6c86655aa18f2e8592ecad70f9e15a7d8e6a21338195448e4c95da6205" Feb 18 14:29:33 crc kubenswrapper[4739]: I0218 14:29:33.872891 4739 scope.go:117] "RemoveContainer" containerID="3e20d5bc67da999c67b2b030638e14f2a7846dbe20d76ce5dce6686024c72645" Feb 18 14:29:34 crc kubenswrapper[4739]: I0218 14:29:34.032742 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-x8lmx"] Feb 18 14:29:34 crc kubenswrapper[4739]: I0218 14:29:34.047646 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-84ff-account-create-update-9xb4v"] Feb 18 14:29:34 crc kubenswrapper[4739]: I0218 14:29:34.057739 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-973a-account-create-update-lsz5w"] Feb 18 14:29:34 crc kubenswrapper[4739]: I0218 14:29:34.068882 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-84ff-account-create-update-9xb4v"] Feb 18 14:29:34 crc kubenswrapper[4739]: I0218 14:29:34.079243 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-x8lmx"] Feb 18 14:29:34 crc kubenswrapper[4739]: I0218 14:29:34.091507 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-973a-account-create-update-lsz5w"] Feb 18 14:29:34 crc kubenswrapper[4739]: I0218 14:29:34.424814 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0275833c-ab0c-4865-9c6e-5c8d54a5e238" path="/var/lib/kubelet/pods/0275833c-ab0c-4865-9c6e-5c8d54a5e238/volumes" Feb 18 14:29:34 crc kubenswrapper[4739]: I0218 14:29:34.425946 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e4c634d-6e65-4f6b-8001-0ac3e35a4801" path="/var/lib/kubelet/pods/8e4c634d-6e65-4f6b-8001-0ac3e35a4801/volumes" Feb 18 14:29:34 crc kubenswrapper[4739]: I0218 14:29:34.426585 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c50e4a24-ad83-4694-be4d-6b0811726c3d" path="/var/lib/kubelet/pods/c50e4a24-ad83-4694-be4d-6b0811726c3d/volumes" Feb 18 14:29:34 crc kubenswrapper[4739]: I0218 14:29:34.427192 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1637477-36b3-4dea-b260-15b6e2532af8" path="/var/lib/kubelet/pods/e1637477-36b3-4dea-b260-15b6e2532af8/volumes" Feb 18 14:29:34 crc kubenswrapper[4739]: I0218 14:29:34.428859 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8c94ce9-7b1b-43bd-9c93-303d0e675809" path="/var/lib/kubelet/pods/f8c94ce9-7b1b-43bd-9c93-303d0e675809/volumes" Feb 18 14:29:46 crc kubenswrapper[4739]: I0218 14:29:46.069543 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-d06e-account-create-update-nwqxj"] Feb 18 14:29:46 crc kubenswrapper[4739]: I0218 14:29:46.085244 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm"] Feb 18 14:29:46 crc kubenswrapper[4739]: I0218 14:29:46.098974 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-n6kgm"] Feb 18 14:29:46 crc kubenswrapper[4739]: I0218 14:29:46.110805 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-d06e-account-create-update-nwqxj"] Feb 18 14:29:46 crc kubenswrapper[4739]: I0218 14:29:46.428178 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4689ea28-dac4-434f-af87-18d6fc903330" 
path="/var/lib/kubelet/pods/4689ea28-dac4-434f-af87-18d6fc903330/volumes" Feb 18 14:29:46 crc kubenswrapper[4739]: I0218 14:29:46.429165 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff" path="/var/lib/kubelet/pods/b0b9a6cb-633e-4390-b1f9-048bc4a7a6ff/volumes" Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.198176 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j"] Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.202725 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j" Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.207139 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.207826 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.273880 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j"] Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.383799 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x67hl\" (UniqueName: \"kubernetes.io/projected/87fcc484-b43a-4471-9ae0-a8af18a937be-kube-api-access-x67hl\") pod \"collect-profiles-29523750-sws8j\" (UID: \"87fcc484-b43a-4471-9ae0-a8af18a937be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j" Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.383911 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87fcc484-b43a-4471-9ae0-a8af18a937be-secret-volume\") pod \"collect-profiles-29523750-sws8j\" (UID: \"87fcc484-b43a-4471-9ae0-a8af18a937be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j" Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.384009 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87fcc484-b43a-4471-9ae0-a8af18a937be-config-volume\") pod \"collect-profiles-29523750-sws8j\" (UID: \"87fcc484-b43a-4471-9ae0-a8af18a937be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j" Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.485749 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x67hl\" (UniqueName: \"kubernetes.io/projected/87fcc484-b43a-4471-9ae0-a8af18a937be-kube-api-access-x67hl\") pod \"collect-profiles-29523750-sws8j\" (UID: \"87fcc484-b43a-4471-9ae0-a8af18a937be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j" Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.486038 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87fcc484-b43a-4471-9ae0-a8af18a937be-secret-volume\") pod \"collect-profiles-29523750-sws8j\" (UID: \"87fcc484-b43a-4471-9ae0-a8af18a937be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j" Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 
14:30:00.486224 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87fcc484-b43a-4471-9ae0-a8af18a937be-config-volume\") pod \"collect-profiles-29523750-sws8j\" (UID: \"87fcc484-b43a-4471-9ae0-a8af18a937be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j" Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.487591 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87fcc484-b43a-4471-9ae0-a8af18a937be-config-volume\") pod \"collect-profiles-29523750-sws8j\" (UID: \"87fcc484-b43a-4471-9ae0-a8af18a937be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j" Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.492293 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87fcc484-b43a-4471-9ae0-a8af18a937be-secret-volume\") pod \"collect-profiles-29523750-sws8j\" (UID: \"87fcc484-b43a-4471-9ae0-a8af18a937be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j" Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.502838 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x67hl\" (UniqueName: \"kubernetes.io/projected/87fcc484-b43a-4471-9ae0-a8af18a937be-kube-api-access-x67hl\") pod \"collect-profiles-29523750-sws8j\" (UID: \"87fcc484-b43a-4471-9ae0-a8af18a937be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j" Feb 18 14:30:00 crc kubenswrapper[4739]: I0218 14:30:00.545170 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j" Feb 18 14:30:01 crc kubenswrapper[4739]: I0218 14:30:01.074388 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j"] Feb 18 14:30:01 crc kubenswrapper[4739]: I0218 14:30:01.816322 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j" event={"ID":"87fcc484-b43a-4471-9ae0-a8af18a937be","Type":"ContainerStarted","Data":"9b76a0bd2d504547a365abbe6087525e7fb33e148bde30e2d85310db58fb4427"} Feb 18 14:30:01 crc kubenswrapper[4739]: I0218 14:30:01.816657 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j" event={"ID":"87fcc484-b43a-4471-9ae0-a8af18a937be","Type":"ContainerStarted","Data":"5af4dfb26a353ffc2911e046aec158bba417dabe58af26b40fa241b99d809ff5"} Feb 18 14:30:01 crc kubenswrapper[4739]: I0218 14:30:01.836781 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j" podStartSLOduration=1.836764562 podStartE2EDuration="1.836764562s" podCreationTimestamp="2026-02-18 14:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:30:01.830422508 +0000 UTC m=+1834.326143440" watchObservedRunningTime="2026-02-18 14:30:01.836764562 +0000 UTC m=+1834.332485504" Feb 18 14:30:02 crc kubenswrapper[4739]: I0218 14:30:02.829882 4739 generic.go:334] "Generic (PLEG): container finished" podID="87fcc484-b43a-4471-9ae0-a8af18a937be" 
containerID="9b76a0bd2d504547a365abbe6087525e7fb33e148bde30e2d85310db58fb4427" exitCode=0 Feb 18 14:30:02 crc kubenswrapper[4739]: I0218 14:30:02.830972 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j" event={"ID":"87fcc484-b43a-4471-9ae0-a8af18a937be","Type":"ContainerDied","Data":"9b76a0bd2d504547a365abbe6087525e7fb33e148bde30e2d85310db58fb4427"} Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.313954 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j" Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.399399 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87fcc484-b43a-4471-9ae0-a8af18a937be-config-volume\") pod \"87fcc484-b43a-4471-9ae0-a8af18a937be\" (UID: \"87fcc484-b43a-4471-9ae0-a8af18a937be\") " Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.399607 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x67hl\" (UniqueName: \"kubernetes.io/projected/87fcc484-b43a-4471-9ae0-a8af18a937be-kube-api-access-x67hl\") pod \"87fcc484-b43a-4471-9ae0-a8af18a937be\" (UID: \"87fcc484-b43a-4471-9ae0-a8af18a937be\") " Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.399847 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87fcc484-b43a-4471-9ae0-a8af18a937be-secret-volume\") pod \"87fcc484-b43a-4471-9ae0-a8af18a937be\" (UID: \"87fcc484-b43a-4471-9ae0-a8af18a937be\") " Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.401774 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87fcc484-b43a-4471-9ae0-a8af18a937be-config-volume" (OuterVolumeSpecName: "config-volume") pod "87fcc484-b43a-4471-9ae0-a8af18a937be" (UID: "87fcc484-b43a-4471-9ae0-a8af18a937be"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.407420 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87fcc484-b43a-4471-9ae0-a8af18a937be-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "87fcc484-b43a-4471-9ae0-a8af18a937be" (UID: "87fcc484-b43a-4471-9ae0-a8af18a937be"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.407475 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87fcc484-b43a-4471-9ae0-a8af18a937be-kube-api-access-x67hl" (OuterVolumeSpecName: "kube-api-access-x67hl") pod "87fcc484-b43a-4471-9ae0-a8af18a937be" (UID: "87fcc484-b43a-4471-9ae0-a8af18a937be"). InnerVolumeSpecName "kube-api-access-x67hl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.502512 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87fcc484-b43a-4471-9ae0-a8af18a937be-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.502548 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87fcc484-b43a-4471-9ae0-a8af18a937be-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.502560 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x67hl\" (UniqueName: \"kubernetes.io/projected/87fcc484-b43a-4471-9ae0-a8af18a937be-kube-api-access-x67hl\") on node \"crc\" DevicePath \"\"" Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.855845 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j" event={"ID":"87fcc484-b43a-4471-9ae0-a8af18a937be","Type":"ContainerDied","Data":"5af4dfb26a353ffc2911e046aec158bba417dabe58af26b40fa241b99d809ff5"} Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.856166 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5af4dfb26a353ffc2911e046aec158bba417dabe58af26b40fa241b99d809ff5" Feb 18 14:30:04 crc kubenswrapper[4739]: I0218 14:30:04.855925 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j" Feb 18 14:30:09 crc kubenswrapper[4739]: I0218 14:30:09.051302 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2t2n6"] Feb 18 14:30:09 crc kubenswrapper[4739]: I0218 14:30:09.064222 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-2t2n6"] Feb 18 14:30:10 crc kubenswrapper[4739]: I0218 14:30:10.423551 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1df0b15-6927-4300-b034-6b5c3308320d" path="/var/lib/kubelet/pods/f1df0b15-6927-4300-b034-6b5c3308320d/volumes" Feb 18 14:30:20 crc kubenswrapper[4739]: I0218 14:30:20.081199 4739 generic.go:334] "Generic (PLEG): container finished" podID="64a6af44-5f38-4ac7-a370-74b190762136" containerID="693161be45d8d36fda8c2d4dc95d7bad1c0a7d87875be1b93f225b971a6de51d" exitCode=0 Feb 18 14:30:20 crc kubenswrapper[4739]: I0218 14:30:20.081319 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" event={"ID":"64a6af44-5f38-4ac7-a370-74b190762136","Type":"ContainerDied","Data":"693161be45d8d36fda8c2d4dc95d7bad1c0a7d87875be1b93f225b971a6de51d"} Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.048602 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-tzg9c"] Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.059206 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-rlcgk"] Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.069207 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-rlcgk"] Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.085190 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-tzg9c"] Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.594289 4739 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.736184 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htwsl\" (UniqueName: \"kubernetes.io/projected/64a6af44-5f38-4ac7-a370-74b190762136-kube-api-access-htwsl\") pod \"64a6af44-5f38-4ac7-a370-74b190762136\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.736236 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-inventory\") pod \"64a6af44-5f38-4ac7-a370-74b190762136\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.736412 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-ssh-key-openstack-edpm-ipam\") pod \"64a6af44-5f38-4ac7-a370-74b190762136\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.736643 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-bootstrap-combined-ca-bundle\") pod \"64a6af44-5f38-4ac7-a370-74b190762136\" (UID: \"64a6af44-5f38-4ac7-a370-74b190762136\") " Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.743798 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64a6af44-5f38-4ac7-a370-74b190762136-kube-api-access-htwsl" (OuterVolumeSpecName: "kube-api-access-htwsl") pod "64a6af44-5f38-4ac7-a370-74b190762136" (UID: "64a6af44-5f38-4ac7-a370-74b190762136"). InnerVolumeSpecName "kube-api-access-htwsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.744708 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "64a6af44-5f38-4ac7-a370-74b190762136" (UID: "64a6af44-5f38-4ac7-a370-74b190762136"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.777022 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "64a6af44-5f38-4ac7-a370-74b190762136" (UID: "64a6af44-5f38-4ac7-a370-74b190762136"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.785867 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-inventory" (OuterVolumeSpecName: "inventory") pod "64a6af44-5f38-4ac7-a370-74b190762136" (UID: "64a6af44-5f38-4ac7-a370-74b190762136"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.841158 4739 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.841394 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htwsl\" (UniqueName: \"kubernetes.io/projected/64a6af44-5f38-4ac7-a370-74b190762136-kube-api-access-htwsl\") on node \"crc\" DevicePath \"\"" Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.841411 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:30:21 crc kubenswrapper[4739]: I0218 14:30:21.841425 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64a6af44-5f38-4ac7-a370-74b190762136-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.034047 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-4km74"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.053097 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-4km74"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.067814 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-6lzcd"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.084012 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-1ad6-account-create-update-pz97t"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.103138 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-d1d2-account-create-update-spvtj"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.107306 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" event={"ID":"64a6af44-5f38-4ac7-a370-74b190762136","Type":"ContainerDied","Data":"1dda38bf5e5a89e3a1c2a63b4204490abe8ce3663a76f18cac169be7d4899eb3"} Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.107353 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dda38bf5e5a89e3a1c2a63b4204490abe8ce3663a76f18cac169be7d4899eb3" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.107392 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.120847 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-6lzcd"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.136371 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-1ad6-account-create-update-pz97t"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.152273 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-d1d2-account-create-update-spvtj"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.168377 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-c4dd-account-create-update-xvgtp"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.184266 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-c4dd-account-create-update-xvgtp"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.199200 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64f1-account-create-update-9xxvd"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.229739 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-64f1-account-create-update-9xxvd"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.250532 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv"] Feb 18 14:30:22 crc kubenswrapper[4739]: E0218 14:30:22.251129 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87fcc484-b43a-4471-9ae0-a8af18a937be" containerName="collect-profiles" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.251153 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="87fcc484-b43a-4471-9ae0-a8af18a937be" containerName="collect-profiles" Feb 18 14:30:22 crc kubenswrapper[4739]: E0218 14:30:22.251182 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64a6af44-5f38-4ac7-a370-74b190762136" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.251191 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="64a6af44-5f38-4ac7-a370-74b190762136" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.251408 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="64a6af44-5f38-4ac7-a370-74b190762136" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.251436 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="87fcc484-b43a-4471-9ae0-a8af18a937be" containerName="collect-profiles" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.252324 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.255084 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.255141 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.255200 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.255198 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.261532 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv"] Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.362693 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed059e6b-2560-487a-98a8-c1443d31cca9-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv\" (UID: \"ed059e6b-2560-487a-98a8-c1443d31cca9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.362783 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed059e6b-2560-487a-98a8-c1443d31cca9-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv\" (UID: \"ed059e6b-2560-487a-98a8-c1443d31cca9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.362941 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfhkd\" (UniqueName: \"kubernetes.io/projected/ed059e6b-2560-487a-98a8-c1443d31cca9-kube-api-access-lfhkd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv\" (UID: \"ed059e6b-2560-487a-98a8-c1443d31cca9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.430791 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20e0fc8a-5942-417e-9fbb-4f94536db193" path="/var/lib/kubelet/pods/20e0fc8a-5942-417e-9fbb-4f94536db193/volumes" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.437293 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a" path="/var/lib/kubelet/pods/26e7d1d7-d06e-4faf-8f75-b0f8d0fed56a/volumes" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.440327 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c90e24b-98c5-4e26-8819-a5ae1aef1102" path="/var/lib/kubelet/pods/2c90e24b-98c5-4e26-8819-a5ae1aef1102/volumes" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.443772 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39bd8e39-8e54-46e1-8217-dbdd74be8a8c" path="/var/lib/kubelet/pods/39bd8e39-8e54-46e1-8217-dbdd74be8a8c/volumes" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.447641 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4d208990-8bd6-4b82-bba8-200f5c7985d0" path="/var/lib/kubelet/pods/4d208990-8bd6-4b82-bba8-200f5c7985d0/volumes" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.450200 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e60ca77-b621-4dfc-8b92-89d8cad06bf0" path="/var/lib/kubelet/pods/4e60ca77-b621-4dfc-8b92-89d8cad06bf0/volumes" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.452746 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da457314-f1eb-477e-93c7-cf0d01e0f1e1" path="/var/lib/kubelet/pods/da457314-f1eb-477e-93c7-cf0d01e0f1e1/volumes" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.454642 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f06df363-1196-4ba5-a5ba-d6e6c419a9d2" path="/var/lib/kubelet/pods/f06df363-1196-4ba5-a5ba-d6e6c419a9d2/volumes" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.465098 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed059e6b-2560-487a-98a8-c1443d31cca9-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv\" (UID: \"ed059e6b-2560-487a-98a8-c1443d31cca9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.465181 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed059e6b-2560-487a-98a8-c1443d31cca9-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv\" (UID: \"ed059e6b-2560-487a-98a8-c1443d31cca9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.465266 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfhkd\" (UniqueName: \"kubernetes.io/projected/ed059e6b-2560-487a-98a8-c1443d31cca9-kube-api-access-lfhkd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv\" (UID: \"ed059e6b-2560-487a-98a8-c1443d31cca9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.471165 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed059e6b-2560-487a-98a8-c1443d31cca9-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv\" (UID: \"ed059e6b-2560-487a-98a8-c1443d31cca9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.473530 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed059e6b-2560-487a-98a8-c1443d31cca9-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv\" (UID: \"ed059e6b-2560-487a-98a8-c1443d31cca9\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.485247 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfhkd\" (UniqueName: \"kubernetes.io/projected/ed059e6b-2560-487a-98a8-c1443d31cca9-kube-api-access-lfhkd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv\" (UID: \"ed059e6b-2560-487a-98a8-c1443d31cca9\") " 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" Feb 18 14:30:22 crc kubenswrapper[4739]: I0218 14:30:22.577080 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" Feb 18 14:30:23 crc kubenswrapper[4739]: I0218 14:30:23.194913 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv"] Feb 18 14:30:23 crc kubenswrapper[4739]: I0218 14:30:23.199541 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 14:30:24 crc kubenswrapper[4739]: I0218 14:30:24.145291 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" event={"ID":"ed059e6b-2560-487a-98a8-c1443d31cca9","Type":"ContainerStarted","Data":"82a6e9a5f9c5c80c3e4624efd3163c459809aa077df1e7712fd32ff2f63f2eaa"} Feb 18 14:30:25 crc kubenswrapper[4739]: I0218 14:30:25.167930 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" event={"ID":"ed059e6b-2560-487a-98a8-c1443d31cca9","Type":"ContainerStarted","Data":"e36e9cea2e9509acd37c756569ccb607ef32b0c6a6cd144b690231f1e10fd4d3"} Feb 18 14:30:25 crc kubenswrapper[4739]: I0218 14:30:25.184353 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" podStartSLOduration=1.950107111 podStartE2EDuration="3.18433797s" podCreationTimestamp="2026-02-18 14:30:22 +0000 UTC" firstStartedPulling="2026-02-18 14:30:23.199227704 +0000 UTC m=+1855.694948626" lastFinishedPulling="2026-02-18 14:30:24.433458563 +0000 UTC m=+1856.929179485" observedRunningTime="2026-02-18 14:30:25.181838082 +0000 UTC m=+1857.677559004" watchObservedRunningTime="2026-02-18 14:30:25.18433797 +0000 UTC m=+1857.680058892" Feb 18 14:30:26 crc kubenswrapper[4739]: I0218 14:30:26.048585 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-gnm8m"] Feb 18 14:30:26 crc kubenswrapper[4739]: I0218 14:30:26.065397 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-gnm8m"] Feb 18 14:30:26 crc kubenswrapper[4739]: I0218 14:30:26.428273 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edf3454e-4ac2-42a7-98b1-0f43065764c2" path="/var/lib/kubelet/pods/edf3454e-4ac2-42a7-98b1-0f43065764c2/volumes" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.039216 4739 scope.go:117] "RemoveContainer" containerID="03bcbac09256150553750b2ceb7fcb6d133193457a99a73d75f4293c1b1edcb5" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.074497 4739 scope.go:117] "RemoveContainer" containerID="f594884fb4b83b0c04ce8bf8aae7f920c402fcb97cae39a2f4cf017d5bf71b59" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.164021 4739 scope.go:117] "RemoveContainer" containerID="040eeb174e895a0add4ac74007d11ab4b4e0bb01f7764fd5d6eff38c7db3910b" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.210646 4739 scope.go:117] "RemoveContainer" containerID="0d27470aa9ffe633d4b6a23a81a92ae2b802439fbedd1d4e1b5cb7aad209d3a5" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.287159 4739 scope.go:117] "RemoveContainer" containerID="76d32868e66155322323110ff775c5fb0e6f82fae8441ced2e3f98e4b9321c1d" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.329138 4739 scope.go:117] "RemoveContainer" 
containerID="0d326d9bd65ce654fe1a2b264586d9b66aecc19bd475abfcd3d94ee3f6d660d5" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.387389 4739 scope.go:117] "RemoveContainer" containerID="b71e725f96b6406936744325d7c950ca7ac36b206c41fc8ca5c6914fe0564b72" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.415200 4739 scope.go:117] "RemoveContainer" containerID="983f1c80cf67be3eed058f21350cec25209804a043b4033e89a7b4a7d1a23683" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.438939 4739 scope.go:117] "RemoveContainer" containerID="6e0f8193aeee1a9fde88a87836367d413530c7cef69dff31c0125463693bc71d" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.462641 4739 scope.go:117] "RemoveContainer" containerID="a765ba1e358815d14c909f560cbad1d380538cd7c1dacb154a2b8d05f4b98d09" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.488909 4739 scope.go:117] "RemoveContainer" containerID="e1cc91021e3962c425b43e910f166ba0094177006eafab98477f0ed269daa076" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.518986 4739 scope.go:117] "RemoveContainer" containerID="52da9b09d947fe24144c6c47d6f9580445b80136111737b82302681aad3a5631" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.549488 4739 scope.go:117] "RemoveContainer" containerID="06c6fe02fa56ef5594d8d43926f6b44f805a40324d87581600b0c88cf5d2d444" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.571743 4739 scope.go:117] "RemoveContainer" containerID="2f8b36ebc50069dffafc10ad5580f0650c3a5e44aee32de71fb90f645671e661" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.610504 4739 scope.go:117] "RemoveContainer" containerID="b43639724ef806f70a0570b3c7861b506614a00a4a43b0f7196363d0163afa24" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.649379 4739 scope.go:117] "RemoveContainer" containerID="6e738a7131fce65327168b727257db46debba0b3633c57a8a9e6484d2f38829f" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.710088 4739 scope.go:117] "RemoveContainer" containerID="aa9ecd9df38cda3b827f1db0a7848f77cc373ad0ddebd313df697a0b9ff36e7e" Feb 18 14:30:34 crc kubenswrapper[4739]: I0218 14:30:34.730417 4739 scope.go:117] "RemoveContainer" containerID="fad628d0c641c2b53d938feaf95bc1f324bbe0db103093a12604f18fd9eafc41" Feb 18 14:30:38 crc kubenswrapper[4739]: I0218 14:30:38.071620 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-gsm82"] Feb 18 14:30:38 crc kubenswrapper[4739]: I0218 14:30:38.099679 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-gsm82"] Feb 18 14:30:38 crc kubenswrapper[4739]: I0218 14:30:38.422657 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbeb37ff-68ee-4cc5-add5-18fc25605b6f" path="/var/lib/kubelet/pods/dbeb37ff-68ee-4cc5-add5-18fc25605b6f/volumes" Feb 18 14:31:29 crc kubenswrapper[4739]: I0218 14:31:29.373349 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:31:29 crc kubenswrapper[4739]: I0218 14:31:29.374380 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 
14:31:30 crc kubenswrapper[4739]: I0218 14:31:30.060666 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-42sfc"] Feb 18 14:31:30 crc kubenswrapper[4739]: I0218 14:31:30.072158 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-42sfc"] Feb 18 14:31:30 crc kubenswrapper[4739]: I0218 14:31:30.424727 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c42d996-bf46-4e69-892f-c720a9bce282" path="/var/lib/kubelet/pods/0c42d996-bf46-4e69-892f-c720a9bce282/volumes" Feb 18 14:31:32 crc kubenswrapper[4739]: I0218 14:31:32.033729 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-q58nf"] Feb 18 14:31:32 crc kubenswrapper[4739]: I0218 14:31:32.046935 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-q58nf"] Feb 18 14:31:32 crc kubenswrapper[4739]: I0218 14:31:32.423373 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc" path="/var/lib/kubelet/pods/f2b3b3ed-d6c1-4c2b-9431-30c9e89068cc/volumes" Feb 18 14:31:35 crc kubenswrapper[4739]: I0218 14:31:35.177712 4739 scope.go:117] "RemoveContainer" containerID="d755d74166c084972a673dd411c3ae3925155e88943bb67d4481d42cff283489" Feb 18 14:31:35 crc kubenswrapper[4739]: I0218 14:31:35.211821 4739 scope.go:117] "RemoveContainer" containerID="2cf4cbe6ff09b90a4081b821121e04359d9724929504c9ff576ebbffcc98ba2d" Feb 18 14:31:35 crc kubenswrapper[4739]: I0218 14:31:35.257428 4739 scope.go:117] "RemoveContainer" containerID="eb767b246d01786ba7d5e7aea0f8547789de5633ab93f7984d8f9084bda9cde1" Feb 18 14:31:35 crc kubenswrapper[4739]: I0218 14:31:35.278748 4739 scope.go:117] "RemoveContainer" containerID="6c0ee0eafacbca4301c6ded44d73ba09227c9ee1f2e6957623ca4214bd62e5df" Feb 18 14:31:35 crc kubenswrapper[4739]: I0218 14:31:35.338860 4739 scope.go:117] "RemoveContainer" containerID="008998419ac3a845430a1074a96b3f7b5b4ba5a04964c1bb0ae62e1f93981104" Feb 18 14:31:35 crc kubenswrapper[4739]: I0218 14:31:35.391511 4739 scope.go:117] "RemoveContainer" containerID="331132c24f3ac7a502d7f3f575324d2550d00d5e32f94df80daa161182a3e385" Feb 18 14:31:39 crc kubenswrapper[4739]: I0218 14:31:39.084242 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-h5s86"] Feb 18 14:31:39 crc kubenswrapper[4739]: I0218 14:31:39.096738 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-h5s86"] Feb 18 14:31:40 crc kubenswrapper[4739]: I0218 14:31:40.427960 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8" path="/var/lib/kubelet/pods/a6917d6e-a9ab-4381-ae7f-1f0d0cbfc6f8/volumes" Feb 18 14:31:43 crc kubenswrapper[4739]: I0218 14:31:43.053802 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-hc8hk"] Feb 18 14:31:43 crc kubenswrapper[4739]: I0218 14:31:43.066734 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-hm27f"] Feb 18 14:31:43 crc kubenswrapper[4739]: I0218 14:31:43.082010 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-hm27f"] Feb 18 14:31:43 crc kubenswrapper[4739]: I0218 14:31:43.094726 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-hc8hk"] Feb 18 14:31:44 crc kubenswrapper[4739]: I0218 14:31:44.423889 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="51d77527-a940-4423-ac63-4a7cdf366510" path="/var/lib/kubelet/pods/51d77527-a940-4423-ac63-4a7cdf366510/volumes" Feb 18 14:31:44 crc kubenswrapper[4739]: I0218 14:31:44.425870 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3697715-3f94-4086-99ab-65a492bd7542" path="/var/lib/kubelet/pods/b3697715-3f94-4086-99ab-65a492bd7542/volumes" Feb 18 14:31:59 crc kubenswrapper[4739]: I0218 14:31:59.372838 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:31:59 crc kubenswrapper[4739]: I0218 14:31:59.373424 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:32:29 crc kubenswrapper[4739]: I0218 14:32:29.373297 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:32:29 crc kubenswrapper[4739]: I0218 14:32:29.373911 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:32:29 crc kubenswrapper[4739]: I0218 14:32:29.373956 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 14:32:29 crc kubenswrapper[4739]: I0218 14:32:29.375274 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eac2682f7b1c0ab63659ddee01f98f4f7cbae0ee5ed689e12d939bd80a710334"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:32:29 crc kubenswrapper[4739]: I0218 14:32:29.375352 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://eac2682f7b1c0ab63659ddee01f98f4f7cbae0ee5ed689e12d939bd80a710334" gracePeriod=600 Feb 18 14:32:29 crc kubenswrapper[4739]: I0218 14:32:29.875334 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="eac2682f7b1c0ab63659ddee01f98f4f7cbae0ee5ed689e12d939bd80a710334" exitCode=0 Feb 18 14:32:29 crc kubenswrapper[4739]: I0218 14:32:29.875428 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"eac2682f7b1c0ab63659ddee01f98f4f7cbae0ee5ed689e12d939bd80a710334"} Feb 18 14:32:29 crc kubenswrapper[4739]: I0218 14:32:29.875728 
4739 scope.go:117] "RemoveContainer" containerID="1ed71aaebbed6445845cf4b8646f6889ef5723286d20e83fe19bd5985f91b124" Feb 18 14:32:30 crc kubenswrapper[4739]: I0218 14:32:30.888498 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934"} Feb 18 14:32:35 crc kubenswrapper[4739]: I0218 14:32:35.567974 4739 scope.go:117] "RemoveContainer" containerID="615daa9d2c89107b5d8baf69578eb811649ddb2693aedf9b046cefb6786b3af5" Feb 18 14:32:35 crc kubenswrapper[4739]: I0218 14:32:35.613356 4739 scope.go:117] "RemoveContainer" containerID="13f81a775889f6ea108dde89cc1b11f4232f55a79b2165f0775cd5d113f547b2" Feb 18 14:32:35 crc kubenswrapper[4739]: I0218 14:32:35.684362 4739 scope.go:117] "RemoveContainer" containerID="d0d344e509459df1445da7eae6edf0b5c1a43772e911ac197e49dc6ffc6fe7a4" Feb 18 14:32:37 crc kubenswrapper[4739]: I0218 14:32:37.961474 4739 generic.go:334] "Generic (PLEG): container finished" podID="ed059e6b-2560-487a-98a8-c1443d31cca9" containerID="e36e9cea2e9509acd37c756569ccb607ef32b0c6a6cd144b690231f1e10fd4d3" exitCode=0 Feb 18 14:32:37 crc kubenswrapper[4739]: I0218 14:32:37.961581 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" event={"ID":"ed059e6b-2560-487a-98a8-c1443d31cca9","Type":"ContainerDied","Data":"e36e9cea2e9509acd37c756569ccb607ef32b0c6a6cd144b690231f1e10fd4d3"} Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.479922 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.668911 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfhkd\" (UniqueName: \"kubernetes.io/projected/ed059e6b-2560-487a-98a8-c1443d31cca9-kube-api-access-lfhkd\") pod \"ed059e6b-2560-487a-98a8-c1443d31cca9\" (UID: \"ed059e6b-2560-487a-98a8-c1443d31cca9\") " Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.668993 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed059e6b-2560-487a-98a8-c1443d31cca9-inventory\") pod \"ed059e6b-2560-487a-98a8-c1443d31cca9\" (UID: \"ed059e6b-2560-487a-98a8-c1443d31cca9\") " Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.669212 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed059e6b-2560-487a-98a8-c1443d31cca9-ssh-key-openstack-edpm-ipam\") pod \"ed059e6b-2560-487a-98a8-c1443d31cca9\" (UID: \"ed059e6b-2560-487a-98a8-c1443d31cca9\") " Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.682805 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed059e6b-2560-487a-98a8-c1443d31cca9-kube-api-access-lfhkd" (OuterVolumeSpecName: "kube-api-access-lfhkd") pod "ed059e6b-2560-487a-98a8-c1443d31cca9" (UID: "ed059e6b-2560-487a-98a8-c1443d31cca9"). InnerVolumeSpecName "kube-api-access-lfhkd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.699975 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed059e6b-2560-487a-98a8-c1443d31cca9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ed059e6b-2560-487a-98a8-c1443d31cca9" (UID: "ed059e6b-2560-487a-98a8-c1443d31cca9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.709983 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed059e6b-2560-487a-98a8-c1443d31cca9-inventory" (OuterVolumeSpecName: "inventory") pod "ed059e6b-2560-487a-98a8-c1443d31cca9" (UID: "ed059e6b-2560-487a-98a8-c1443d31cca9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.774087 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed059e6b-2560-487a-98a8-c1443d31cca9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.774121 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfhkd\" (UniqueName: \"kubernetes.io/projected/ed059e6b-2560-487a-98a8-c1443d31cca9-kube-api-access-lfhkd\") on node \"crc\" DevicePath \"\"" Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.774131 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed059e6b-2560-487a-98a8-c1443d31cca9-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.989964 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" event={"ID":"ed059e6b-2560-487a-98a8-c1443d31cca9","Type":"ContainerDied","Data":"82a6e9a5f9c5c80c3e4624efd3163c459809aa077df1e7712fd32ff2f63f2eaa"} Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.990027 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82a6e9a5f9c5c80c3e4624efd3163c459809aa077df1e7712fd32ff2f63f2eaa" Feb 18 14:32:39 crc kubenswrapper[4739]: I0218 14:32:39.990217 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv" Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.088567 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j"] Feb 18 14:32:40 crc kubenswrapper[4739]: E0218 14:32:40.089182 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed059e6b-2560-487a-98a8-c1443d31cca9" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.089212 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed059e6b-2560-487a-98a8-c1443d31cca9" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.089560 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed059e6b-2560-487a-98a8-c1443d31cca9" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.090596 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j" Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.093490 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.093752 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.093969 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.096760 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.105984 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j"] Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.194424 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf45n\" (UniqueName: \"kubernetes.io/projected/c3fe82f6-0603-44f2-95fa-57ce24505d2c-kube-api-access-kf45n\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-74l2j\" (UID: \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j" Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.194820 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3fe82f6-0603-44f2-95fa-57ce24505d2c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-74l2j\" (UID: \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j" Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.194926 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3fe82f6-0603-44f2-95fa-57ce24505d2c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-74l2j\" (UID: \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j" Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.297097 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf45n\" (UniqueName: \"kubernetes.io/projected/c3fe82f6-0603-44f2-95fa-57ce24505d2c-kube-api-access-kf45n\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-74l2j\" (UID: \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j" Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.297153 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3fe82f6-0603-44f2-95fa-57ce24505d2c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-74l2j\" (UID: \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j" Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.297275 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3fe82f6-0603-44f2-95fa-57ce24505d2c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-74l2j\" (UID: \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j" Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.301368 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3fe82f6-0603-44f2-95fa-57ce24505d2c-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-74l2j\" (UID: \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j" Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.313804 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3fe82f6-0603-44f2-95fa-57ce24505d2c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-74l2j\" (UID: \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j" Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.319690 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf45n\" (UniqueName: \"kubernetes.io/projected/c3fe82f6-0603-44f2-95fa-57ce24505d2c-kube-api-access-kf45n\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-74l2j\" (UID: \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j" Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.420904 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j" Feb 18 14:32:40 crc kubenswrapper[4739]: I0218 14:32:40.962481 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j"] Feb 18 14:32:41 crc kubenswrapper[4739]: I0218 14:32:41.000682 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j" event={"ID":"c3fe82f6-0603-44f2-95fa-57ce24505d2c","Type":"ContainerStarted","Data":"fa05a5bdd5eb8aa6517618b2cc6b129b18c332acdce8ab6cf85adb799214f4aa"} Feb 18 14:32:42 crc kubenswrapper[4739]: I0218 14:32:42.013471 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j" event={"ID":"c3fe82f6-0603-44f2-95fa-57ce24505d2c","Type":"ContainerStarted","Data":"89e42e2a936eab142fac63aa2f66623e2e2cd57a28bd3401e4bd7c0a325f8fa0"} Feb 18 14:32:42 crc kubenswrapper[4739]: I0218 14:32:42.031315 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j" podStartSLOduration=1.491547416 podStartE2EDuration="2.031293874s" podCreationTimestamp="2026-02-18 14:32:40 +0000 UTC" firstStartedPulling="2026-02-18 14:32:40.967316273 +0000 UTC m=+1993.463037195" lastFinishedPulling="2026-02-18 14:32:41.507062731 +0000 UTC m=+1994.002783653" observedRunningTime="2026-02-18 14:32:42.027501235 +0000 UTC m=+1994.523222177" watchObservedRunningTime="2026-02-18 14:32:42.031293874 +0000 UTC m=+1994.527014796" Feb 18 14:32:50 crc kubenswrapper[4739]: I0218 14:32:50.044833 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-frlf8"] Feb 18 14:32:50 crc kubenswrapper[4739]: I0218 14:32:50.057976 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-frlf8"] Feb 18 14:32:50 crc kubenswrapper[4739]: I0218 14:32:50.423705 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="290b50b0-4283-4a40-b694-4a5f18b39b1a" path="/var/lib/kubelet/pods/290b50b0-4283-4a40-b694-4a5f18b39b1a/volumes" Feb 18 14:32:51 crc kubenswrapper[4739]: I0218 14:32:51.037629 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-8ab4-account-create-update-zkq89"] Feb 18 14:32:51 crc kubenswrapper[4739]: I0218 14:32:51.052992 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-6q6nn"] Feb 18 14:32:51 crc kubenswrapper[4739]: I0218 14:32:51.065932 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-79vbk"] Feb 18 14:32:51 crc kubenswrapper[4739]: I0218 14:32:51.076662 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-022d-account-create-update-6krg8"] Feb 18 14:32:51 crc kubenswrapper[4739]: I0218 14:32:51.089104 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-04e8-account-create-update-9qcd6"] Feb 18 14:32:51 crc kubenswrapper[4739]: I0218 14:32:51.114695 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-6q6nn"] Feb 18 14:32:51 crc kubenswrapper[4739]: I0218 14:32:51.128223 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-8ab4-account-create-update-zkq89"] Feb 18 14:32:51 crc kubenswrapper[4739]: I0218 14:32:51.139113 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell0-022d-account-create-update-6krg8"] Feb 18 14:32:51 crc kubenswrapper[4739]: I0218 14:32:51.149635 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-04e8-account-create-update-9qcd6"] Feb 18 14:32:51 crc kubenswrapper[4739]: I0218 14:32:51.158271 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-79vbk"] Feb 18 14:32:52 crc kubenswrapper[4739]: I0218 14:32:52.425268 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd" path="/var/lib/kubelet/pods/1a5b6ee8-c3fa-4e1f-b8fe-33da9a0f70dd/volumes" Feb 18 14:32:52 crc kubenswrapper[4739]: I0218 14:32:52.426516 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f229688-5021-4d28-9109-98071744a102" path="/var/lib/kubelet/pods/1f229688-5021-4d28-9109-98071744a102/volumes" Feb 18 14:32:52 crc kubenswrapper[4739]: I0218 14:32:52.430010 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="429115da-eb66-4dc9-9210-86cd0525a6cf" path="/var/lib/kubelet/pods/429115da-eb66-4dc9-9210-86cd0525a6cf/volumes" Feb 18 14:32:52 crc kubenswrapper[4739]: I0218 14:32:52.430702 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c33399d1-a28e-4e19-aba8-a218018e5e8b" path="/var/lib/kubelet/pods/c33399d1-a28e-4e19-aba8-a218018e5e8b/volumes" Feb 18 14:32:52 crc kubenswrapper[4739]: I0218 14:32:52.431248 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f689babc-92f9-4e45-8fb3-40722e18cd10" path="/var/lib/kubelet/pods/f689babc-92f9-4e45-8fb3-40722e18cd10/volumes" Feb 18 14:33:33 crc kubenswrapper[4739]: I0218 14:33:33.050379 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xfg9d"] Feb 18 14:33:33 crc kubenswrapper[4739]: I0218 14:33:33.062447 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xfg9d"] Feb 18 14:33:34 crc kubenswrapper[4739]: I0218 14:33:34.445547 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ed7afcd-a9be-4c59-836d-355e4c502a01" path="/var/lib/kubelet/pods/2ed7afcd-a9be-4c59-836d-355e4c502a01/volumes" Feb 18 14:33:35 crc kubenswrapper[4739]: I0218 14:33:35.840430 4739 scope.go:117] "RemoveContainer" containerID="cd193d9c848f0cb5846f4803a361ea578be3e4975f2d687992d1efc73cd54125" Feb 18 14:33:35 crc kubenswrapper[4739]: I0218 14:33:35.887084 4739 scope.go:117] "RemoveContainer" containerID="d354c12b67eababcd672627661526374e41cf79bf2c5f51fc2d961512732ad80" Feb 18 14:33:35 crc kubenswrapper[4739]: I0218 14:33:35.931920 4739 scope.go:117] "RemoveContainer" containerID="f180991429bb7c01f25e8e0932cfc4a2c2e639764155f5051da2395874ce4177" Feb 18 14:33:36 crc kubenswrapper[4739]: I0218 14:33:36.011077 4739 scope.go:117] "RemoveContainer" containerID="c294346ed483351749b57b335bfd04c525dff76c2eb0efbc4e1ea2d1c1b22ce8" Feb 18 14:33:36 crc kubenswrapper[4739]: I0218 14:33:36.128885 4739 scope.go:117] "RemoveContainer" containerID="f2e4b9fb06b8dfc6962768e47edc73a399125a6a5af8a24a17fe6e665b490f62" Feb 18 14:33:36 crc kubenswrapper[4739]: I0218 14:33:36.191617 4739 scope.go:117] "RemoveContainer" containerID="7decdedc36c29035cbd6c5768e12052f73ae02bcfb7ff083bd55e7ded7c3ba91" Feb 18 14:33:36 crc kubenswrapper[4739]: I0218 14:33:36.240001 4739 scope.go:117] "RemoveContainer" containerID="164ed4c991352152994d527ba5112c6e7d1903b4f2261af5e3d479652dee7c0f" Feb 18 14:33:44 crc kubenswrapper[4739]: I0218 
14:33:44.045193 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-55b1-account-create-update-rl2bd"] Feb 18 14:33:44 crc kubenswrapper[4739]: I0218 14:33:44.055722 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-55b1-account-create-update-rl2bd"] Feb 18 14:33:44 crc kubenswrapper[4739]: I0218 14:33:44.428059 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7351c0c9-c9c1-474c-a9cc-cde24bd45dfa" path="/var/lib/kubelet/pods/7351c0c9-c9c1-474c-a9cc-cde24bd45dfa/volumes" Feb 18 14:33:45 crc kubenswrapper[4739]: I0218 14:33:45.034670 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-zmb2f"] Feb 18 14:33:45 crc kubenswrapper[4739]: I0218 14:33:45.044869 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-zmb2f"] Feb 18 14:33:46 crc kubenswrapper[4739]: I0218 14:33:46.441345 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4445c84e-2108-44e0-a46e-673fe0858df3" path="/var/lib/kubelet/pods/4445c84e-2108-44e0-a46e-673fe0858df3/volumes" Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.461881 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hvzqm"] Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.466543 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hvzqm" Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.478186 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hvzqm"] Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.583055 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2f46b1c-aab8-49aa-936d-40da9b28333b-catalog-content\") pod \"redhat-operators-hvzqm\" (UID: \"c2f46b1c-aab8-49aa-936d-40da9b28333b\") " pod="openshift-marketplace/redhat-operators-hvzqm" Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.583591 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2f46b1c-aab8-49aa-936d-40da9b28333b-utilities\") pod \"redhat-operators-hvzqm\" (UID: \"c2f46b1c-aab8-49aa-936d-40da9b28333b\") " pod="openshift-marketplace/redhat-operators-hvzqm" Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.583643 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqhkz\" (UniqueName: \"kubernetes.io/projected/c2f46b1c-aab8-49aa-936d-40da9b28333b-kube-api-access-lqhkz\") pod \"redhat-operators-hvzqm\" (UID: \"c2f46b1c-aab8-49aa-936d-40da9b28333b\") " pod="openshift-marketplace/redhat-operators-hvzqm" Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.685839 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2f46b1c-aab8-49aa-936d-40da9b28333b-utilities\") pod \"redhat-operators-hvzqm\" (UID: \"c2f46b1c-aab8-49aa-936d-40da9b28333b\") " pod="openshift-marketplace/redhat-operators-hvzqm" Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.685949 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqhkz\" (UniqueName: \"kubernetes.io/projected/c2f46b1c-aab8-49aa-936d-40da9b28333b-kube-api-access-lqhkz\") pod \"redhat-operators-hvzqm\" (UID: 
\"c2f46b1c-aab8-49aa-936d-40da9b28333b\") " pod="openshift-marketplace/redhat-operators-hvzqm" Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.686036 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2f46b1c-aab8-49aa-936d-40da9b28333b-catalog-content\") pod \"redhat-operators-hvzqm\" (UID: \"c2f46b1c-aab8-49aa-936d-40da9b28333b\") " pod="openshift-marketplace/redhat-operators-hvzqm" Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.686336 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2f46b1c-aab8-49aa-936d-40da9b28333b-utilities\") pod \"redhat-operators-hvzqm\" (UID: \"c2f46b1c-aab8-49aa-936d-40da9b28333b\") " pod="openshift-marketplace/redhat-operators-hvzqm" Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.686603 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2f46b1c-aab8-49aa-936d-40da9b28333b-catalog-content\") pod \"redhat-operators-hvzqm\" (UID: \"c2f46b1c-aab8-49aa-936d-40da9b28333b\") " pod="openshift-marketplace/redhat-operators-hvzqm" Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.705251 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqhkz\" (UniqueName: \"kubernetes.io/projected/c2f46b1c-aab8-49aa-936d-40da9b28333b-kube-api-access-lqhkz\") pod \"redhat-operators-hvzqm\" (UID: \"c2f46b1c-aab8-49aa-936d-40da9b28333b\") " pod="openshift-marketplace/redhat-operators-hvzqm" Feb 18 14:33:49 crc kubenswrapper[4739]: I0218 14:33:49.788915 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hvzqm" Feb 18 14:33:50 crc kubenswrapper[4739]: I0218 14:33:50.330538 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hvzqm"] Feb 18 14:33:50 crc kubenswrapper[4739]: I0218 14:33:50.732187 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hvzqm" event={"ID":"c2f46b1c-aab8-49aa-936d-40da9b28333b","Type":"ContainerStarted","Data":"444fa77d8c7d241ff0c97a4f96d30c1d73837e4032a3356b79e27ccd6961d7ea"} Feb 18 14:33:50 crc kubenswrapper[4739]: I0218 14:33:50.732227 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hvzqm" event={"ID":"c2f46b1c-aab8-49aa-936d-40da9b28333b","Type":"ContainerStarted","Data":"b92690c72462eba244e27a6cbf4928687a786ba839112f5863cebe7a7538bd7c"} Feb 18 14:33:51 crc kubenswrapper[4739]: I0218 14:33:51.745227 4739 generic.go:334] "Generic (PLEG): container finished" podID="c2f46b1c-aab8-49aa-936d-40da9b28333b" containerID="444fa77d8c7d241ff0c97a4f96d30c1d73837e4032a3356b79e27ccd6961d7ea" exitCode=0 Feb 18 14:33:51 crc kubenswrapper[4739]: I0218 14:33:51.745318 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hvzqm" event={"ID":"c2f46b1c-aab8-49aa-936d-40da9b28333b","Type":"ContainerDied","Data":"444fa77d8c7d241ff0c97a4f96d30c1d73837e4032a3356b79e27ccd6961d7ea"} Feb 18 14:33:54 crc kubenswrapper[4739]: I0218 14:33:54.789222 4739 generic.go:334] "Generic (PLEG): container finished" podID="c3fe82f6-0603-44f2-95fa-57ce24505d2c" containerID="89e42e2a936eab142fac63aa2f66623e2e2cd57a28bd3401e4bd7c0a325f8fa0" exitCode=0 Feb 18 14:33:54 crc kubenswrapper[4739]: I0218 14:33:54.789318 4739 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j" event={"ID":"c3fe82f6-0603-44f2-95fa-57ce24505d2c","Type":"ContainerDied","Data":"89e42e2a936eab142fac63aa2f66623e2e2cd57a28bd3401e4bd7c0a325f8fa0"} Feb 18 14:33:58 crc kubenswrapper[4739]: I0218 14:33:58.032830 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-ldxnr"] Feb 18 14:33:58 crc kubenswrapper[4739]: I0218 14:33:58.048682 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-ldxnr"] Feb 18 14:33:58 crc kubenswrapper[4739]: I0218 14:33:58.428370 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f44227f-28d1-4aaf-9133-c4560b893022" path="/var/lib/kubelet/pods/5f44227f-28d1-4aaf-9133-c4560b893022/volumes" Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.209716 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j" Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.301615 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf45n\" (UniqueName: \"kubernetes.io/projected/c3fe82f6-0603-44f2-95fa-57ce24505d2c-kube-api-access-kf45n\") pod \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\" (UID: \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\") " Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.302100 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3fe82f6-0603-44f2-95fa-57ce24505d2c-inventory\") pod \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\" (UID: \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\") " Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.310163 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3fe82f6-0603-44f2-95fa-57ce24505d2c-kube-api-access-kf45n" (OuterVolumeSpecName: "kube-api-access-kf45n") pod "c3fe82f6-0603-44f2-95fa-57ce24505d2c" (UID: "c3fe82f6-0603-44f2-95fa-57ce24505d2c"). InnerVolumeSpecName "kube-api-access-kf45n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.364250 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3fe82f6-0603-44f2-95fa-57ce24505d2c-inventory" (OuterVolumeSpecName: "inventory") pod "c3fe82f6-0603-44f2-95fa-57ce24505d2c" (UID: "c3fe82f6-0603-44f2-95fa-57ce24505d2c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.403526 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3fe82f6-0603-44f2-95fa-57ce24505d2c-ssh-key-openstack-edpm-ipam\") pod \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\" (UID: \"c3fe82f6-0603-44f2-95fa-57ce24505d2c\") " Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.403888 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf45n\" (UniqueName: \"kubernetes.io/projected/c3fe82f6-0603-44f2-95fa-57ce24505d2c-kube-api-access-kf45n\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.403905 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3fe82f6-0603-44f2-95fa-57ce24505d2c-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.444692 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3fe82f6-0603-44f2-95fa-57ce24505d2c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c3fe82f6-0603-44f2-95fa-57ce24505d2c" (UID: "c3fe82f6-0603-44f2-95fa-57ce24505d2c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.505424 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3fe82f6-0603-44f2-95fa-57ce24505d2c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.855429 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j" event={"ID":"c3fe82f6-0603-44f2-95fa-57ce24505d2c","Type":"ContainerDied","Data":"fa05a5bdd5eb8aa6517618b2cc6b129b18c332acdce8ab6cf85adb799214f4aa"} Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.855885 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa05a5bdd5eb8aa6517618b2cc6b129b18c332acdce8ab6cf85adb799214f4aa" Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.855516 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-74l2j" Feb 18 14:34:00 crc kubenswrapper[4739]: I0218 14:34:00.858562 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hvzqm" event={"ID":"c2f46b1c-aab8-49aa-936d-40da9b28333b","Type":"ContainerStarted","Data":"ba704138dbf39216d74ce1e1897b73f874d3997ca0fb6a822f58f7e5a0210e33"} Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.362637 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh"] Feb 18 14:34:01 crc kubenswrapper[4739]: E0218 14:34:01.363408 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3fe82f6-0603-44f2-95fa-57ce24505d2c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.363429 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3fe82f6-0603-44f2-95fa-57ce24505d2c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.363893 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3fe82f6-0603-44f2-95fa-57ce24505d2c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.365285 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.370294 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.370588 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.370752 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.371279 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.372107 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh"] Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.531650 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h48dj\" (UniqueName: \"kubernetes.io/projected/884f40e4-492b-4f73-94a7-8be81bde150e-kube-api-access-h48dj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh\" (UID: \"884f40e4-492b-4f73-94a7-8be81bde150e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.532339 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/884f40e4-492b-4f73-94a7-8be81bde150e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh\" (UID: \"884f40e4-492b-4f73-94a7-8be81bde150e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.532577 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/884f40e4-492b-4f73-94a7-8be81bde150e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh\" (UID: \"884f40e4-492b-4f73-94a7-8be81bde150e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.633766 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/884f40e4-492b-4f73-94a7-8be81bde150e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh\" (UID: \"884f40e4-492b-4f73-94a7-8be81bde150e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.633862 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/884f40e4-492b-4f73-94a7-8be81bde150e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh\" (UID: \"884f40e4-492b-4f73-94a7-8be81bde150e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.634755 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h48dj\" (UniqueName: \"kubernetes.io/projected/884f40e4-492b-4f73-94a7-8be81bde150e-kube-api-access-h48dj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh\" (UID: \"884f40e4-492b-4f73-94a7-8be81bde150e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.647198 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/884f40e4-492b-4f73-94a7-8be81bde150e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh\" (UID: \"884f40e4-492b-4f73-94a7-8be81bde150e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.648194 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/884f40e4-492b-4f73-94a7-8be81bde150e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh\" (UID: \"884f40e4-492b-4f73-94a7-8be81bde150e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.654136 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h48dj\" (UniqueName: \"kubernetes.io/projected/884f40e4-492b-4f73-94a7-8be81bde150e-kube-api-access-h48dj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh\" (UID: \"884f40e4-492b-4f73-94a7-8be81bde150e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:01 crc kubenswrapper[4739]: I0218 14:34:01.732217 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:02 crc kubenswrapper[4739]: I0218 14:34:02.299537 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh"] Feb 18 14:34:02 crc kubenswrapper[4739]: W0218 14:34:02.331300 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod884f40e4_492b_4f73_94a7_8be81bde150e.slice/crio-cd9d53b363dce83b68215d93327e8470730dda6ba10badc38fada163fc00ac77 WatchSource:0}: Error finding container cd9d53b363dce83b68215d93327e8470730dda6ba10badc38fada163fc00ac77: Status 404 returned error can't find the container with id cd9d53b363dce83b68215d93327e8470730dda6ba10badc38fada163fc00ac77 Feb 18 14:34:02 crc kubenswrapper[4739]: I0218 14:34:02.886015 4739 generic.go:334] "Generic (PLEG): container finished" podID="c2f46b1c-aab8-49aa-936d-40da9b28333b" containerID="ba704138dbf39216d74ce1e1897b73f874d3997ca0fb6a822f58f7e5a0210e33" exitCode=0 Feb 18 14:34:02 crc kubenswrapper[4739]: I0218 14:34:02.886093 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hvzqm" event={"ID":"c2f46b1c-aab8-49aa-936d-40da9b28333b","Type":"ContainerDied","Data":"ba704138dbf39216d74ce1e1897b73f874d3997ca0fb6a822f58f7e5a0210e33"} Feb 18 14:34:02 crc kubenswrapper[4739]: I0218 14:34:02.889519 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" event={"ID":"884f40e4-492b-4f73-94a7-8be81bde150e","Type":"ContainerStarted","Data":"cd9d53b363dce83b68215d93327e8470730dda6ba10badc38fada163fc00ac77"} Feb 18 14:34:03 crc kubenswrapper[4739]: I0218 14:34:03.901383 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" event={"ID":"884f40e4-492b-4f73-94a7-8be81bde150e","Type":"ContainerStarted","Data":"18cade01ff342ab3b70b3ed35d174da6101ffd51f6ac4470478bce89a45f0e5c"} Feb 18 14:34:03 crc kubenswrapper[4739]: I0218 14:34:03.930849 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" podStartSLOduration=2.525927796 podStartE2EDuration="2.930828365s" podCreationTimestamp="2026-02-18 14:34:01 +0000 UTC" firstStartedPulling="2026-02-18 14:34:02.335108377 +0000 UTC m=+2074.830829299" lastFinishedPulling="2026-02-18 14:34:02.740008946 +0000 UTC m=+2075.235729868" observedRunningTime="2026-02-18 14:34:03.918645544 +0000 UTC m=+2076.414366476" watchObservedRunningTime="2026-02-18 14:34:03.930828365 +0000 UTC m=+2076.426549307" Feb 18 14:34:04 crc kubenswrapper[4739]: I0218 14:34:04.922595 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hvzqm" event={"ID":"c2f46b1c-aab8-49aa-936d-40da9b28333b","Type":"ContainerStarted","Data":"d023c80406c03a6201ba40309856fb155c13a9f51b1123ea61496bb3dca72e55"} Feb 18 14:34:04 crc kubenswrapper[4739]: I0218 14:34:04.961947 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hvzqm" podStartSLOduration=2.750166928 podStartE2EDuration="15.961927353s" podCreationTimestamp="2026-02-18 14:33:49 +0000 UTC" firstStartedPulling="2026-02-18 14:33:50.734292898 +0000 UTC m=+2063.230013820" lastFinishedPulling="2026-02-18 14:34:03.946053313 +0000 UTC m=+2076.441774245" 
observedRunningTime="2026-02-18 14:34:04.954541165 +0000 UTC m=+2077.450262087" watchObservedRunningTime="2026-02-18 14:34:04.961927353 +0000 UTC m=+2077.457648275" Feb 18 14:34:08 crc kubenswrapper[4739]: I0218 14:34:08.964420 4739 generic.go:334] "Generic (PLEG): container finished" podID="884f40e4-492b-4f73-94a7-8be81bde150e" containerID="18cade01ff342ab3b70b3ed35d174da6101ffd51f6ac4470478bce89a45f0e5c" exitCode=0 Feb 18 14:34:08 crc kubenswrapper[4739]: I0218 14:34:08.964484 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" event={"ID":"884f40e4-492b-4f73-94a7-8be81bde150e","Type":"ContainerDied","Data":"18cade01ff342ab3b70b3ed35d174da6101ffd51f6ac4470478bce89a45f0e5c"} Feb 18 14:34:09 crc kubenswrapper[4739]: I0218 14:34:09.789274 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hvzqm" Feb 18 14:34:09 crc kubenswrapper[4739]: I0218 14:34:09.789351 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hvzqm" Feb 18 14:34:10 crc kubenswrapper[4739]: I0218 14:34:10.524223 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:10 crc kubenswrapper[4739]: I0218 14:34:10.586358 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/884f40e4-492b-4f73-94a7-8be81bde150e-inventory\") pod \"884f40e4-492b-4f73-94a7-8be81bde150e\" (UID: \"884f40e4-492b-4f73-94a7-8be81bde150e\") " Feb 18 14:34:10 crc kubenswrapper[4739]: I0218 14:34:10.586469 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h48dj\" (UniqueName: \"kubernetes.io/projected/884f40e4-492b-4f73-94a7-8be81bde150e-kube-api-access-h48dj\") pod \"884f40e4-492b-4f73-94a7-8be81bde150e\" (UID: \"884f40e4-492b-4f73-94a7-8be81bde150e\") " Feb 18 14:34:10 crc kubenswrapper[4739]: I0218 14:34:10.586708 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/884f40e4-492b-4f73-94a7-8be81bde150e-ssh-key-openstack-edpm-ipam\") pod \"884f40e4-492b-4f73-94a7-8be81bde150e\" (UID: \"884f40e4-492b-4f73-94a7-8be81bde150e\") " Feb 18 14:34:10 crc kubenswrapper[4739]: I0218 14:34:10.596473 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/884f40e4-492b-4f73-94a7-8be81bde150e-kube-api-access-h48dj" (OuterVolumeSpecName: "kube-api-access-h48dj") pod "884f40e4-492b-4f73-94a7-8be81bde150e" (UID: "884f40e4-492b-4f73-94a7-8be81bde150e"). InnerVolumeSpecName "kube-api-access-h48dj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:34:10 crc kubenswrapper[4739]: I0218 14:34:10.621290 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/884f40e4-492b-4f73-94a7-8be81bde150e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "884f40e4-492b-4f73-94a7-8be81bde150e" (UID: "884f40e4-492b-4f73-94a7-8be81bde150e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:34:10 crc kubenswrapper[4739]: I0218 14:34:10.622894 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/884f40e4-492b-4f73-94a7-8be81bde150e-inventory" (OuterVolumeSpecName: "inventory") pod "884f40e4-492b-4f73-94a7-8be81bde150e" (UID: "884f40e4-492b-4f73-94a7-8be81bde150e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:34:10 crc kubenswrapper[4739]: I0218 14:34:10.689279 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/884f40e4-492b-4f73-94a7-8be81bde150e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:10 crc kubenswrapper[4739]: I0218 14:34:10.689318 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/884f40e4-492b-4f73-94a7-8be81bde150e-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:10 crc kubenswrapper[4739]: I0218 14:34:10.689331 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h48dj\" (UniqueName: \"kubernetes.io/projected/884f40e4-492b-4f73-94a7-8be81bde150e-kube-api-access-h48dj\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:10 crc kubenswrapper[4739]: I0218 14:34:10.845387 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hvzqm" podUID="c2f46b1c-aab8-49aa-936d-40da9b28333b" containerName="registry-server" probeResult="failure" output=< Feb 18 14:34:10 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:34:10 crc kubenswrapper[4739]: > Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.008838 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" event={"ID":"884f40e4-492b-4f73-94a7-8be81bde150e","Type":"ContainerDied","Data":"cd9d53b363dce83b68215d93327e8470730dda6ba10badc38fada163fc00ac77"} Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.008881 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd9d53b363dce83b68215d93327e8470730dda6ba10badc38fada163fc00ac77" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.008898 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.106580 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4"] Feb 18 14:34:11 crc kubenswrapper[4739]: E0218 14:34:11.107189 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="884f40e4-492b-4f73-94a7-8be81bde150e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.107210 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="884f40e4-492b-4f73-94a7-8be81bde150e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.107470 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="884f40e4-492b-4f73-94a7-8be81bde150e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.108344 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.112372 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.112377 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.112934 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.115994 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.119588 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4"] Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.198666 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af925314-bcd8-4373-b57e-612251a9687a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vglv4\" (UID: \"af925314-bcd8-4373-b57e-612251a9687a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.199107 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af925314-bcd8-4373-b57e-612251a9687a-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vglv4\" (UID: \"af925314-bcd8-4373-b57e-612251a9687a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.199134 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zdb6\" (UniqueName: \"kubernetes.io/projected/af925314-bcd8-4373-b57e-612251a9687a-kube-api-access-6zdb6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vglv4\" (UID: \"af925314-bcd8-4373-b57e-612251a9687a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.301900 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af925314-bcd8-4373-b57e-612251a9687a-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vglv4\" (UID: \"af925314-bcd8-4373-b57e-612251a9687a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.301950 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zdb6\" (UniqueName: \"kubernetes.io/projected/af925314-bcd8-4373-b57e-612251a9687a-kube-api-access-6zdb6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vglv4\" (UID: \"af925314-bcd8-4373-b57e-612251a9687a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.302268 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af925314-bcd8-4373-b57e-612251a9687a-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-vglv4\" (UID: \"af925314-bcd8-4373-b57e-612251a9687a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.317630 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af925314-bcd8-4373-b57e-612251a9687a-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vglv4\" (UID: \"af925314-bcd8-4373-b57e-612251a9687a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.319541 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af925314-bcd8-4373-b57e-612251a9687a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vglv4\" (UID: \"af925314-bcd8-4373-b57e-612251a9687a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.321655 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zdb6\" (UniqueName: \"kubernetes.io/projected/af925314-bcd8-4373-b57e-612251a9687a-kube-api-access-6zdb6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vglv4\" (UID: \"af925314-bcd8-4373-b57e-612251a9687a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.428056 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:11 crc kubenswrapper[4739]: I0218 14:34:11.979067 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4"] Feb 18 14:34:12 crc kubenswrapper[4739]: I0218 14:34:12.020594 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" event={"ID":"af925314-bcd8-4373-b57e-612251a9687a","Type":"ContainerStarted","Data":"a92857f694cb927f3dc4da0302205ce4b34bcfd3c096ddbd31f0a6194971d241"} Feb 18 14:34:13 crc kubenswrapper[4739]: I0218 14:34:13.034651 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" event={"ID":"af925314-bcd8-4373-b57e-612251a9687a","Type":"ContainerStarted","Data":"a0a591d66554e3b571681de69c76547a7c3bea060d3f5e4c4e82aa59c580c103"} Feb 18 14:34:13 crc kubenswrapper[4739]: I0218 14:34:13.068993 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" podStartSLOduration=1.643496917 podStartE2EDuration="2.06897005s" podCreationTimestamp="2026-02-18 14:34:11 +0000 UTC" firstStartedPulling="2026-02-18 14:34:11.986689737 +0000 UTC m=+2084.482410659" lastFinishedPulling="2026-02-18 14:34:12.41216287 +0000 UTC m=+2084.907883792" observedRunningTime="2026-02-18 14:34:13.054172853 +0000 UTC m=+2085.549893875" watchObservedRunningTime="2026-02-18 14:34:13.06897005 +0000 UTC m=+2085.564690972" Feb 18 14:34:19 crc kubenswrapper[4739]: I0218 14:34:19.843563 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hvzqm" Feb 18 14:34:19 crc kubenswrapper[4739]: I0218 14:34:19.903246 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-hvzqm" Feb 18 14:34:20 crc kubenswrapper[4739]: I0218 14:34:20.507156 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hvzqm"] Feb 18 14:34:20 crc kubenswrapper[4739]: I0218 14:34:20.667485 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n5478"] Feb 18 14:34:20 crc kubenswrapper[4739]: I0218 14:34:20.667783 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n5478" podUID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" containerName="registry-server" containerID="cri-o://65422be5444c8a4ea68ae396ec7f1c722474a478587aebd1878eee8ec7e12e64" gracePeriod=2 Feb 18 14:34:21 crc kubenswrapper[4739]: I0218 14:34:21.141821 4739 generic.go:334] "Generic (PLEG): container finished" podID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" containerID="65422be5444c8a4ea68ae396ec7f1c722474a478587aebd1878eee8ec7e12e64" exitCode=0 Feb 18 14:34:21 crc kubenswrapper[4739]: I0218 14:34:21.142087 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5478" event={"ID":"6eb612bd-4974-4e9b-91d7-0240ce057aa5","Type":"ContainerDied","Data":"65422be5444c8a4ea68ae396ec7f1c722474a478587aebd1878eee8ec7e12e64"} Feb 18 14:34:21 crc kubenswrapper[4739]: I0218 14:34:21.285063 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:34:21 crc kubenswrapper[4739]: I0218 14:34:21.461197 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb612bd-4974-4e9b-91d7-0240ce057aa5-utilities\") pod \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\" (UID: \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\") " Feb 18 14:34:21 crc kubenswrapper[4739]: I0218 14:34:21.461254 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb612bd-4974-4e9b-91d7-0240ce057aa5-catalog-content\") pod \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\" (UID: \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\") " Feb 18 14:34:21 crc kubenswrapper[4739]: I0218 14:34:21.461296 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzvjj\" (UniqueName: \"kubernetes.io/projected/6eb612bd-4974-4e9b-91d7-0240ce057aa5-kube-api-access-zzvjj\") pod \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\" (UID: \"6eb612bd-4974-4e9b-91d7-0240ce057aa5\") " Feb 18 14:34:21 crc kubenswrapper[4739]: I0218 14:34:21.465107 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eb612bd-4974-4e9b-91d7-0240ce057aa5-utilities" (OuterVolumeSpecName: "utilities") pod "6eb612bd-4974-4e9b-91d7-0240ce057aa5" (UID: "6eb612bd-4974-4e9b-91d7-0240ce057aa5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:34:21 crc kubenswrapper[4739]: I0218 14:34:21.492903 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eb612bd-4974-4e9b-91d7-0240ce057aa5-kube-api-access-zzvjj" (OuterVolumeSpecName: "kube-api-access-zzvjj") pod "6eb612bd-4974-4e9b-91d7-0240ce057aa5" (UID: "6eb612bd-4974-4e9b-91d7-0240ce057aa5"). InnerVolumeSpecName "kube-api-access-zzvjj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:34:21 crc kubenswrapper[4739]: I0218 14:34:21.565220 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb612bd-4974-4e9b-91d7-0240ce057aa5-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:21 crc kubenswrapper[4739]: I0218 14:34:21.565493 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzvjj\" (UniqueName: \"kubernetes.io/projected/6eb612bd-4974-4e9b-91d7-0240ce057aa5-kube-api-access-zzvjj\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:21 crc kubenswrapper[4739]: I0218 14:34:21.661311 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eb612bd-4974-4e9b-91d7-0240ce057aa5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6eb612bd-4974-4e9b-91d7-0240ce057aa5" (UID: "6eb612bd-4974-4e9b-91d7-0240ce057aa5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:34:21 crc kubenswrapper[4739]: I0218 14:34:21.667505 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb612bd-4974-4e9b-91d7-0240ce057aa5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:22 crc kubenswrapper[4739]: I0218 14:34:22.156972 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5478" event={"ID":"6eb612bd-4974-4e9b-91d7-0240ce057aa5","Type":"ContainerDied","Data":"81b46654edd19d1432b58f9bd2576a94f39cc05f5d205ae85216f27b952d6aca"} Feb 18 14:34:22 crc kubenswrapper[4739]: I0218 14:34:22.157039 4739 scope.go:117] "RemoveContainer" containerID="65422be5444c8a4ea68ae396ec7f1c722474a478587aebd1878eee8ec7e12e64" Feb 18 14:34:22 crc kubenswrapper[4739]: I0218 14:34:22.157096 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n5478" Feb 18 14:34:22 crc kubenswrapper[4739]: I0218 14:34:22.185684 4739 scope.go:117] "RemoveContainer" containerID="eb5f5e626edf6dc5aeeea1562bacf9b30a38b08f9a8a02a3adf3e93c88281a22" Feb 18 14:34:22 crc kubenswrapper[4739]: I0218 14:34:22.208860 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n5478"] Feb 18 14:34:22 crc kubenswrapper[4739]: I0218 14:34:22.222969 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n5478"] Feb 18 14:34:22 crc kubenswrapper[4739]: I0218 14:34:22.226625 4739 scope.go:117] "RemoveContainer" containerID="cd68ab8027f647103dec3361912c6740c7fe91057ba0556d4d221b3bd0864eff" Feb 18 14:34:22 crc kubenswrapper[4739]: I0218 14:34:22.422415 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" path="/var/lib/kubelet/pods/6eb612bd-4974-4e9b-91d7-0240ce057aa5/volumes" Feb 18 14:34:24 crc kubenswrapper[4739]: I0218 14:34:24.049211 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7d9ft"] Feb 18 14:34:24 crc kubenswrapper[4739]: I0218 14:34:24.068219 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7d9ft"] Feb 18 14:34:24 crc kubenswrapper[4739]: I0218 14:34:24.425076 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4d2e1ea-d8fe-4724-becf-0a53840d8b5c" path="/var/lib/kubelet/pods/d4d2e1ea-d8fe-4724-becf-0a53840d8b5c/volumes" Feb 18 14:34:29 crc kubenswrapper[4739]: I0218 14:34:29.373146 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:34:29 crc kubenswrapper[4739]: I0218 14:34:29.373862 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:34:36 crc kubenswrapper[4739]: I0218 14:34:36.413632 4739 scope.go:117] "RemoveContainer" containerID="633345116a43d3ca8fa44023cd81269b98b8fe89948eab70d0c8a2b4002309e9" Feb 18 14:34:36 crc kubenswrapper[4739]: I0218 14:34:36.470038 4739 scope.go:117] "RemoveContainer" containerID="c6cce8603450086875d16ae66c0fe0efdc54a90290fdaaf6cec216bd19489355" Feb 18 14:34:36 crc kubenswrapper[4739]: I0218 14:34:36.527556 4739 scope.go:117] "RemoveContainer" containerID="f654a93fc558fd96d5cdb40c4eb8145a76ceb6daf5c1d8dd83b579ef3e4f1ae6" Feb 18 14:34:36 crc kubenswrapper[4739]: I0218 14:34:36.585064 4739 scope.go:117] "RemoveContainer" containerID="67951a3352fb939ea45b17ca75ec53a682c20dd4d63961be0be0da15f32b4807" Feb 18 14:34:44 crc kubenswrapper[4739]: I0218 14:34:44.051940 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-mvdqm"] Feb 18 14:34:44 crc kubenswrapper[4739]: I0218 14:34:44.063053 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-mvdqm"] Feb 18 14:34:44 crc kubenswrapper[4739]: I0218 14:34:44.425478 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="147cff80-30af-4fc7-961f-5f6e17af51bb" path="/var/lib/kubelet/pods/147cff80-30af-4fc7-961f-5f6e17af51bb/volumes" Feb 18 14:34:46 crc kubenswrapper[4739]: I0218 14:34:46.422151 4739 generic.go:334] "Generic (PLEG): container finished" podID="af925314-bcd8-4373-b57e-612251a9687a" containerID="a0a591d66554e3b571681de69c76547a7c3bea060d3f5e4c4e82aa59c580c103" exitCode=0 Feb 18 14:34:46 crc kubenswrapper[4739]: I0218 14:34:46.423293 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" event={"ID":"af925314-bcd8-4373-b57e-612251a9687a","Type":"ContainerDied","Data":"a0a591d66554e3b571681de69c76547a7c3bea060d3f5e4c4e82aa59c580c103"} Feb 18 14:34:47 crc kubenswrapper[4739]: I0218 14:34:47.941360 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.103266 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zdb6\" (UniqueName: \"kubernetes.io/projected/af925314-bcd8-4373-b57e-612251a9687a-kube-api-access-6zdb6\") pod \"af925314-bcd8-4373-b57e-612251a9687a\" (UID: \"af925314-bcd8-4373-b57e-612251a9687a\") " Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.103437 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af925314-bcd8-4373-b57e-612251a9687a-inventory\") pod \"af925314-bcd8-4373-b57e-612251a9687a\" (UID: \"af925314-bcd8-4373-b57e-612251a9687a\") " Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.103624 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af925314-bcd8-4373-b57e-612251a9687a-ssh-key-openstack-edpm-ipam\") pod \"af925314-bcd8-4373-b57e-612251a9687a\" (UID: \"af925314-bcd8-4373-b57e-612251a9687a\") " Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.120936 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af925314-bcd8-4373-b57e-612251a9687a-kube-api-access-6zdb6" (OuterVolumeSpecName: "kube-api-access-6zdb6") pod "af925314-bcd8-4373-b57e-612251a9687a" (UID: "af925314-bcd8-4373-b57e-612251a9687a"). InnerVolumeSpecName "kube-api-access-6zdb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.138251 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af925314-bcd8-4373-b57e-612251a9687a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "af925314-bcd8-4373-b57e-612251a9687a" (UID: "af925314-bcd8-4373-b57e-612251a9687a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.150676 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af925314-bcd8-4373-b57e-612251a9687a-inventory" (OuterVolumeSpecName: "inventory") pod "af925314-bcd8-4373-b57e-612251a9687a" (UID: "af925314-bcd8-4373-b57e-612251a9687a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.206352 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af925314-bcd8-4373-b57e-612251a9687a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.206402 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zdb6\" (UniqueName: \"kubernetes.io/projected/af925314-bcd8-4373-b57e-612251a9687a-kube-api-access-6zdb6\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.206417 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af925314-bcd8-4373-b57e-612251a9687a-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.445271 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" event={"ID":"af925314-bcd8-4373-b57e-612251a9687a","Type":"ContainerDied","Data":"a92857f694cb927f3dc4da0302205ce4b34bcfd3c096ddbd31f0a6194971d241"} Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.445529 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a92857f694cb927f3dc4da0302205ce4b34bcfd3c096ddbd31f0a6194971d241" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.445368 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vglv4" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.536780 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24"] Feb 18 14:34:48 crc kubenswrapper[4739]: E0218 14:34:48.537812 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af925314-bcd8-4373-b57e-612251a9687a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.537887 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="af925314-bcd8-4373-b57e-612251a9687a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 14:34:48 crc kubenswrapper[4739]: E0218 14:34:48.537953 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" containerName="extract-content" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.538037 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" containerName="extract-content" Feb 18 14:34:48 crc kubenswrapper[4739]: E0218 14:34:48.538110 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" containerName="registry-server" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.538176 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" containerName="registry-server" Feb 18 14:34:48 crc kubenswrapper[4739]: E0218 14:34:48.538237 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" containerName="extract-utilities" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.538290 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" containerName="extract-utilities" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.538603 
4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eb612bd-4974-4e9b-91d7-0240ce057aa5" containerName="registry-server" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.538676 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="af925314-bcd8-4373-b57e-612251a9687a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.539594 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.545025 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.545788 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.548283 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.560308 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.572167 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24"] Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.721263 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx2js\" (UniqueName: \"kubernetes.io/projected/8795d84c-3a90-438c-8f2b-066cd875316d-kube-api-access-hx2js\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jq24\" (UID: \"8795d84c-3a90-438c-8f2b-066cd875316d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.722007 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8795d84c-3a90-438c-8f2b-066cd875316d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jq24\" (UID: \"8795d84c-3a90-438c-8f2b-066cd875316d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.722402 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8795d84c-3a90-438c-8f2b-066cd875316d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jq24\" (UID: \"8795d84c-3a90-438c-8f2b-066cd875316d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.825089 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8795d84c-3a90-438c-8f2b-066cd875316d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jq24\" (UID: \"8795d84c-3a90-438c-8f2b-066cd875316d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.825223 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx2js\" (UniqueName: 
\"kubernetes.io/projected/8795d84c-3a90-438c-8f2b-066cd875316d-kube-api-access-hx2js\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jq24\" (UID: \"8795d84c-3a90-438c-8f2b-066cd875316d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.825492 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8795d84c-3a90-438c-8f2b-066cd875316d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jq24\" (UID: \"8795d84c-3a90-438c-8f2b-066cd875316d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.833405 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8795d84c-3a90-438c-8f2b-066cd875316d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jq24\" (UID: \"8795d84c-3a90-438c-8f2b-066cd875316d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.833588 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8795d84c-3a90-438c-8f2b-066cd875316d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jq24\" (UID: \"8795d84c-3a90-438c-8f2b-066cd875316d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.851263 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx2js\" (UniqueName: \"kubernetes.io/projected/8795d84c-3a90-438c-8f2b-066cd875316d-kube-api-access-hx2js\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9jq24\" (UID: \"8795d84c-3a90-438c-8f2b-066cd875316d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:34:48 crc kubenswrapper[4739]: I0218 14:34:48.860137 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:34:49 crc kubenswrapper[4739]: I0218 14:34:49.460266 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24"] Feb 18 14:34:50 crc kubenswrapper[4739]: I0218 14:34:50.467564 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" event={"ID":"8795d84c-3a90-438c-8f2b-066cd875316d","Type":"ContainerStarted","Data":"d288b73919e9ab5a400a769557195ccb45adf86d031473821fca19cff0ad5b9d"} Feb 18 14:34:50 crc kubenswrapper[4739]: I0218 14:34:50.467864 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" event={"ID":"8795d84c-3a90-438c-8f2b-066cd875316d","Type":"ContainerStarted","Data":"b3a34cc07895a1a091e25014f6442b4b884353bcc3deb836e1faf2cf43ee2571"} Feb 18 14:34:50 crc kubenswrapper[4739]: I0218 14:34:50.491515 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" podStartSLOduration=2.073546206 podStartE2EDuration="2.491489988s" podCreationTimestamp="2026-02-18 14:34:48 +0000 UTC" firstStartedPulling="2026-02-18 14:34:49.464092244 +0000 UTC m=+2121.959813166" lastFinishedPulling="2026-02-18 14:34:49.882036026 +0000 UTC m=+2122.377756948" observedRunningTime="2026-02-18 14:34:50.484995492 +0000 UTC m=+2122.980716414" watchObservedRunningTime="2026-02-18 14:34:50.491489988 +0000 UTC m=+2122.987210920" Feb 18 14:34:59 crc kubenswrapper[4739]: I0218 14:34:59.372997 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:34:59 crc kubenswrapper[4739]: I0218 14:34:59.373697 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:35:29 crc kubenswrapper[4739]: I0218 14:35:29.372860 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:35:29 crc kubenswrapper[4739]: I0218 14:35:29.373377 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:35:29 crc kubenswrapper[4739]: I0218 14:35:29.373503 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 14:35:29 crc kubenswrapper[4739]: I0218 14:35:29.374408 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:35:29 crc kubenswrapper[4739]: I0218 14:35:29.374482 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" gracePeriod=600 Feb 18 14:35:29 crc kubenswrapper[4739]: I0218 14:35:29.897155 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" exitCode=0 Feb 18 14:35:29 crc kubenswrapper[4739]: I0218 14:35:29.897229 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934"} Feb 18 14:35:29 crc kubenswrapper[4739]: I0218 14:35:29.897286 4739 scope.go:117] "RemoveContainer" containerID="eac2682f7b1c0ab63659ddee01f98f4f7cbae0ee5ed689e12d939bd80a710334" Feb 18 14:35:30 crc kubenswrapper[4739]: E0218 14:35:30.028529 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:35:30 crc kubenswrapper[4739]: I0218 14:35:30.910035 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:35:30 crc kubenswrapper[4739]: E0218 14:35:30.910668 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:35:35 crc kubenswrapper[4739]: E0218 14:35:35.830759 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8795d84c_3a90_438c_8f2b_066cd875316d.slice/crio-d288b73919e9ab5a400a769557195ccb45adf86d031473821fca19cff0ad5b9d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8795d84c_3a90_438c_8f2b_066cd875316d.slice/crio-conmon-d288b73919e9ab5a400a769557195ccb45adf86d031473821fca19cff0ad5b9d.scope\": RecentStats: unable to find data in memory cache]" Feb 18 14:35:35 crc kubenswrapper[4739]: I0218 14:35:35.960656 4739 generic.go:334] "Generic (PLEG): container finished" podID="8795d84c-3a90-438c-8f2b-066cd875316d" containerID="d288b73919e9ab5a400a769557195ccb45adf86d031473821fca19cff0ad5b9d" exitCode=0 Feb 18 14:35:35 crc kubenswrapper[4739]: 
I0218 14:35:35.960702 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" event={"ID":"8795d84c-3a90-438c-8f2b-066cd875316d","Type":"ContainerDied","Data":"d288b73919e9ab5a400a769557195ccb45adf86d031473821fca19cff0ad5b9d"} Feb 18 14:35:36 crc kubenswrapper[4739]: I0218 14:35:36.739534 4739 scope.go:117] "RemoveContainer" containerID="719754d11a438c2796a0ba11ae2f879324b6243f92382b8f8f42f425c9043930" Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 14:35:37.545954 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 14:35:37.727811 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx2js\" (UniqueName: \"kubernetes.io/projected/8795d84c-3a90-438c-8f2b-066cd875316d-kube-api-access-hx2js\") pod \"8795d84c-3a90-438c-8f2b-066cd875316d\" (UID: \"8795d84c-3a90-438c-8f2b-066cd875316d\") " Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 14:35:37.728871 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8795d84c-3a90-438c-8f2b-066cd875316d-inventory\") pod \"8795d84c-3a90-438c-8f2b-066cd875316d\" (UID: \"8795d84c-3a90-438c-8f2b-066cd875316d\") " Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 14:35:37.728932 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8795d84c-3a90-438c-8f2b-066cd875316d-ssh-key-openstack-edpm-ipam\") pod \"8795d84c-3a90-438c-8f2b-066cd875316d\" (UID: \"8795d84c-3a90-438c-8f2b-066cd875316d\") " Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 14:35:37.736199 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8795d84c-3a90-438c-8f2b-066cd875316d-kube-api-access-hx2js" (OuterVolumeSpecName: "kube-api-access-hx2js") pod "8795d84c-3a90-438c-8f2b-066cd875316d" (UID: "8795d84c-3a90-438c-8f2b-066cd875316d"). InnerVolumeSpecName "kube-api-access-hx2js". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 14:35:37.771093 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8795d84c-3a90-438c-8f2b-066cd875316d-inventory" (OuterVolumeSpecName: "inventory") pod "8795d84c-3a90-438c-8f2b-066cd875316d" (UID: "8795d84c-3a90-438c-8f2b-066cd875316d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 14:35:37.771690 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8795d84c-3a90-438c-8f2b-066cd875316d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8795d84c-3a90-438c-8f2b-066cd875316d" (UID: "8795d84c-3a90-438c-8f2b-066cd875316d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 14:35:37.831850 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8795d84c-3a90-438c-8f2b-066cd875316d-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 14:35:37.831885 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8795d84c-3a90-438c-8f2b-066cd875316d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 14:35:37.831913 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx2js\" (UniqueName: \"kubernetes.io/projected/8795d84c-3a90-438c-8f2b-066cd875316d-kube-api-access-hx2js\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 14:35:37.981073 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" event={"ID":"8795d84c-3a90-438c-8f2b-066cd875316d","Type":"ContainerDied","Data":"b3a34cc07895a1a091e25014f6442b4b884353bcc3deb836e1faf2cf43ee2571"} Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 14:35:37.981404 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3a34cc07895a1a091e25014f6442b4b884353bcc3deb836e1faf2cf43ee2571" Feb 18 14:35:37 crc kubenswrapper[4739]: I0218 14:35:37.981131 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9jq24" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.065382 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-f68sz"] Feb 18 14:35:38 crc kubenswrapper[4739]: E0218 14:35:38.066035 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8795d84c-3a90-438c-8f2b-066cd875316d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.066060 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="8795d84c-3a90-438c-8f2b-066cd875316d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.066325 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="8795d84c-3a90-438c-8f2b-066cd875316d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.067255 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.077114 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-f68sz"] Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.105961 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.106017 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.106191 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.106368 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.241034 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/63f139bc-490d-48b7-98c1-e29c8f583d90-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-f68sz\" (UID: \"63f139bc-490d-48b7-98c1-e29c8f583d90\") " pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.241360 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbgfv\" (UniqueName: \"kubernetes.io/projected/63f139bc-490d-48b7-98c1-e29c8f583d90-kube-api-access-wbgfv\") pod \"ssh-known-hosts-edpm-deployment-f68sz\" (UID: \"63f139bc-490d-48b7-98c1-e29c8f583d90\") " pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.241633 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63f139bc-490d-48b7-98c1-e29c8f583d90-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-f68sz\" (UID: \"63f139bc-490d-48b7-98c1-e29c8f583d90\") " pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.344243 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63f139bc-490d-48b7-98c1-e29c8f583d90-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-f68sz\" (UID: \"63f139bc-490d-48b7-98c1-e29c8f583d90\") " pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.344421 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/63f139bc-490d-48b7-98c1-e29c8f583d90-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-f68sz\" (UID: \"63f139bc-490d-48b7-98c1-e29c8f583d90\") " pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.344558 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbgfv\" (UniqueName: \"kubernetes.io/projected/63f139bc-490d-48b7-98c1-e29c8f583d90-kube-api-access-wbgfv\") pod \"ssh-known-hosts-edpm-deployment-f68sz\" (UID: \"63f139bc-490d-48b7-98c1-e29c8f583d90\") " pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:38 crc 
kubenswrapper[4739]: I0218 14:35:38.349016 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63f139bc-490d-48b7-98c1-e29c8f583d90-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-f68sz\" (UID: \"63f139bc-490d-48b7-98c1-e29c8f583d90\") " pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.349143 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/63f139bc-490d-48b7-98c1-e29c8f583d90-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-f68sz\" (UID: \"63f139bc-490d-48b7-98c1-e29c8f583d90\") " pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.365844 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbgfv\" (UniqueName: \"kubernetes.io/projected/63f139bc-490d-48b7-98c1-e29c8f583d90-kube-api-access-wbgfv\") pod \"ssh-known-hosts-edpm-deployment-f68sz\" (UID: \"63f139bc-490d-48b7-98c1-e29c8f583d90\") " pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:38 crc kubenswrapper[4739]: I0218 14:35:38.443250 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:39 crc kubenswrapper[4739]: I0218 14:35:39.019932 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 14:35:39 crc kubenswrapper[4739]: I0218 14:35:39.024535 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-f68sz"] Feb 18 14:35:40 crc kubenswrapper[4739]: I0218 14:35:40.003661 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" event={"ID":"63f139bc-490d-48b7-98c1-e29c8f583d90","Type":"ContainerStarted","Data":"187f668701c65483c64a33bd8b966160759d19342dfbf99b15688b0475818667"} Feb 18 14:35:40 crc kubenswrapper[4739]: I0218 14:35:40.003904 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" event={"ID":"63f139bc-490d-48b7-98c1-e29c8f583d90","Type":"ContainerStarted","Data":"50537900a47f8a7257258b9346ddc74b1ba2cdd5c32ed6b53de62959232116d6"} Feb 18 14:35:40 crc kubenswrapper[4739]: I0218 14:35:40.026234 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" podStartSLOduration=1.426029508 podStartE2EDuration="2.026208146s" podCreationTimestamp="2026-02-18 14:35:38 +0000 UTC" firstStartedPulling="2026-02-18 14:35:39.019163414 +0000 UTC m=+2171.514884336" lastFinishedPulling="2026-02-18 14:35:39.619342052 +0000 UTC m=+2172.115062974" observedRunningTime="2026-02-18 14:35:40.016990201 +0000 UTC m=+2172.512711133" watchObservedRunningTime="2026-02-18 14:35:40.026208146 +0000 UTC m=+2172.521929088" Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.277045 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ct24c"] Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.287486 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.302136 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ct24c"] Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.320278 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htdm7\" (UniqueName: \"kubernetes.io/projected/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-kube-api-access-htdm7\") pod \"redhat-marketplace-ct24c\" (UID: \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\") " pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.320465 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-utilities\") pod \"redhat-marketplace-ct24c\" (UID: \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\") " pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.320570 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-catalog-content\") pod \"redhat-marketplace-ct24c\" (UID: \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\") " pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.410628 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:35:44 crc kubenswrapper[4739]: E0218 14:35:44.411073 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.423306 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htdm7\" (UniqueName: \"kubernetes.io/projected/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-kube-api-access-htdm7\") pod \"redhat-marketplace-ct24c\" (UID: \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\") " pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.423409 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-utilities\") pod \"redhat-marketplace-ct24c\" (UID: \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\") " pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.423472 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-catalog-content\") pod \"redhat-marketplace-ct24c\" (UID: \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\") " pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.423972 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-utilities\") pod \"redhat-marketplace-ct24c\" (UID: \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\") " pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.423992 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-catalog-content\") pod \"redhat-marketplace-ct24c\" (UID: \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\") " pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.446427 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htdm7\" (UniqueName: \"kubernetes.io/projected/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-kube-api-access-htdm7\") pod \"redhat-marketplace-ct24c\" (UID: \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\") " pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:44 crc kubenswrapper[4739]: I0218 14:35:44.621838 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:45 crc kubenswrapper[4739]: I0218 14:35:45.158930 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ct24c"] Feb 18 14:35:46 crc kubenswrapper[4739]: I0218 14:35:46.084503 4739 generic.go:334] "Generic (PLEG): container finished" podID="7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" containerID="66485ad319c7fe9ceac1feb721af2a8819a801269e3a0d7964a88f55570cbed8" exitCode=0 Feb 18 14:35:46 crc kubenswrapper[4739]: I0218 14:35:46.084832 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ct24c" event={"ID":"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0","Type":"ContainerDied","Data":"66485ad319c7fe9ceac1feb721af2a8819a801269e3a0d7964a88f55570cbed8"} Feb 18 14:35:46 crc kubenswrapper[4739]: I0218 14:35:46.084860 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ct24c" event={"ID":"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0","Type":"ContainerStarted","Data":"e98eb27ec0270c6671d9c8a8131aa295c8aeb989346be9138e5f12fa0696debd"} Feb 18 14:35:47 crc kubenswrapper[4739]: I0218 14:35:47.103808 4739 generic.go:334] "Generic (PLEG): container finished" podID="63f139bc-490d-48b7-98c1-e29c8f583d90" containerID="187f668701c65483c64a33bd8b966160759d19342dfbf99b15688b0475818667" exitCode=0 Feb 18 14:35:47 crc kubenswrapper[4739]: I0218 14:35:47.104199 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" event={"ID":"63f139bc-490d-48b7-98c1-e29c8f583d90","Type":"ContainerDied","Data":"187f668701c65483c64a33bd8b966160759d19342dfbf99b15688b0475818667"} Feb 18 14:35:48 crc kubenswrapper[4739]: I0218 14:35:48.137303 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ct24c" event={"ID":"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0","Type":"ContainerStarted","Data":"098a8ec67aae1154e60c0f157e3eaf1f5042c688dee3829fca52f6b5c3e393f4"} Feb 18 14:35:48 crc kubenswrapper[4739]: I0218 14:35:48.674122 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:48 crc kubenswrapper[4739]: I0218 14:35:48.751889 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/63f139bc-490d-48b7-98c1-e29c8f583d90-inventory-0\") pod \"63f139bc-490d-48b7-98c1-e29c8f583d90\" (UID: \"63f139bc-490d-48b7-98c1-e29c8f583d90\") " Feb 18 14:35:48 crc kubenswrapper[4739]: I0218 14:35:48.752059 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63f139bc-490d-48b7-98c1-e29c8f583d90-ssh-key-openstack-edpm-ipam\") pod \"63f139bc-490d-48b7-98c1-e29c8f583d90\" (UID: \"63f139bc-490d-48b7-98c1-e29c8f583d90\") " Feb 18 14:35:48 crc kubenswrapper[4739]: I0218 14:35:48.752133 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbgfv\" (UniqueName: \"kubernetes.io/projected/63f139bc-490d-48b7-98c1-e29c8f583d90-kube-api-access-wbgfv\") pod \"63f139bc-490d-48b7-98c1-e29c8f583d90\" (UID: \"63f139bc-490d-48b7-98c1-e29c8f583d90\") " Feb 18 14:35:48 crc kubenswrapper[4739]: I0218 14:35:48.776731 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63f139bc-490d-48b7-98c1-e29c8f583d90-kube-api-access-wbgfv" (OuterVolumeSpecName: "kube-api-access-wbgfv") pod "63f139bc-490d-48b7-98c1-e29c8f583d90" (UID: "63f139bc-490d-48b7-98c1-e29c8f583d90"). InnerVolumeSpecName "kube-api-access-wbgfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:35:48 crc kubenswrapper[4739]: I0218 14:35:48.792990 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f139bc-490d-48b7-98c1-e29c8f583d90-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "63f139bc-490d-48b7-98c1-e29c8f583d90" (UID: "63f139bc-490d-48b7-98c1-e29c8f583d90"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:35:48 crc kubenswrapper[4739]: I0218 14:35:48.793647 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f139bc-490d-48b7-98c1-e29c8f583d90-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "63f139bc-490d-48b7-98c1-e29c8f583d90" (UID: "63f139bc-490d-48b7-98c1-e29c8f583d90"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:35:48 crc kubenswrapper[4739]: I0218 14:35:48.861809 4739 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/63f139bc-490d-48b7-98c1-e29c8f583d90-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:48 crc kubenswrapper[4739]: I0218 14:35:48.861845 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63f139bc-490d-48b7-98c1-e29c8f583d90-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:48 crc kubenswrapper[4739]: I0218 14:35:48.861855 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbgfv\" (UniqueName: \"kubernetes.io/projected/63f139bc-490d-48b7-98c1-e29c8f583d90-kube-api-access-wbgfv\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.150501 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" event={"ID":"63f139bc-490d-48b7-98c1-e29c8f583d90","Type":"ContainerDied","Data":"50537900a47f8a7257258b9346ddc74b1ba2cdd5c32ed6b53de62959232116d6"} Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.150909 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50537900a47f8a7257258b9346ddc74b1ba2cdd5c32ed6b53de62959232116d6" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.150752 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-f68sz" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.153267 4739 generic.go:334] "Generic (PLEG): container finished" podID="7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" containerID="098a8ec67aae1154e60c0f157e3eaf1f5042c688dee3829fca52f6b5c3e393f4" exitCode=0 Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.153313 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ct24c" event={"ID":"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0","Type":"ContainerDied","Data":"098a8ec67aae1154e60c0f157e3eaf1f5042c688dee3829fca52f6b5c3e393f4"} Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.230000 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96"] Feb 18 14:35:49 crc kubenswrapper[4739]: E0218 14:35:49.231231 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63f139bc-490d-48b7-98c1-e29c8f583d90" containerName="ssh-known-hosts-edpm-deployment" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.231263 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="63f139bc-490d-48b7-98c1-e29c8f583d90" containerName="ssh-known-hosts-edpm-deployment" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.231580 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="63f139bc-490d-48b7-98c1-e29c8f583d90" containerName="ssh-known-hosts-edpm-deployment" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.232575 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.238934 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.239180 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.239379 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.240110 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.276784 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96"] Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.374734 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jct96\" (UID: \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.375138 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jct96\" (UID: \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.375367 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd9s5\" (UniqueName: \"kubernetes.io/projected/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-kube-api-access-zd9s5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jct96\" (UID: \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.478007 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jct96\" (UID: \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.478127 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jct96\" (UID: \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.478215 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd9s5\" (UniqueName: \"kubernetes.io/projected/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-kube-api-access-zd9s5\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-jct96\" (UID: \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.483103 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jct96\" (UID: \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.495854 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jct96\" (UID: \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.496111 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd9s5\" (UniqueName: \"kubernetes.io/projected/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-kube-api-access-zd9s5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jct96\" (UID: \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 14:35:49 crc kubenswrapper[4739]: I0218 14:35:49.571980 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.064022 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dk57d"] Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.067936 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.081369 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dk57d"] Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.103997 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxs7p\" (UniqueName: \"kubernetes.io/projected/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-kube-api-access-xxs7p\") pod \"certified-operators-dk57d\" (UID: \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\") " pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.104100 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-utilities\") pod \"certified-operators-dk57d\" (UID: \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\") " pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.104343 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-catalog-content\") pod \"certified-operators-dk57d\" (UID: \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\") " pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.172998 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ct24c" event={"ID":"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0","Type":"ContainerStarted","Data":"e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6"} Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.202536 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ct24c" podStartSLOduration=2.71622132 podStartE2EDuration="6.202511499s" podCreationTimestamp="2026-02-18 14:35:44 +0000 UTC" firstStartedPulling="2026-02-18 14:35:46.087141987 +0000 UTC m=+2178.582862909" lastFinishedPulling="2026-02-18 14:35:49.573432166 +0000 UTC m=+2182.069153088" observedRunningTime="2026-02-18 14:35:50.196908306 +0000 UTC m=+2182.692629238" watchObservedRunningTime="2026-02-18 14:35:50.202511499 +0000 UTC m=+2182.698232421" Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.206214 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-catalog-content\") pod \"certified-operators-dk57d\" (UID: \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\") " pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.206303 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxs7p\" (UniqueName: \"kubernetes.io/projected/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-kube-api-access-xxs7p\") pod \"certified-operators-dk57d\" (UID: \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\") " pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.206355 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-utilities\") pod \"certified-operators-dk57d\" (UID: 
\"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\") " pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.206855 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-catalog-content\") pod \"certified-operators-dk57d\" (UID: \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\") " pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.206880 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-utilities\") pod \"certified-operators-dk57d\" (UID: \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\") " pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.229671 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxs7p\" (UniqueName: \"kubernetes.io/projected/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-kube-api-access-xxs7p\") pod \"certified-operators-dk57d\" (UID: \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\") " pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.279415 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96"] Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.399914 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:35:50 crc kubenswrapper[4739]: W0218 14:35:50.928341 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3294ebfc_1c27_44e3_a94e_ef98dfd9f0f1.slice/crio-cdaa3f953e05885fe975cbdb944614d11775a19b3997b116b17e5cc3b88476ef WatchSource:0}: Error finding container cdaa3f953e05885fe975cbdb944614d11775a19b3997b116b17e5cc3b88476ef: Status 404 returned error can't find the container with id cdaa3f953e05885fe975cbdb944614d11775a19b3997b116b17e5cc3b88476ef Feb 18 14:35:50 crc kubenswrapper[4739]: I0218 14:35:50.942083 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dk57d"] Feb 18 14:35:51 crc kubenswrapper[4739]: I0218 14:35:51.196475 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" event={"ID":"18f01021-e95a-43e8-a660-1a2c9cb9d8c5","Type":"ContainerStarted","Data":"2abea5bb56e874060956efd6c58905978721bda04f9962db60beb0ca3290a362"} Feb 18 14:35:51 crc kubenswrapper[4739]: I0218 14:35:51.199667 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk57d" event={"ID":"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1","Type":"ContainerStarted","Data":"cdaa3f953e05885fe975cbdb944614d11775a19b3997b116b17e5cc3b88476ef"} Feb 18 14:35:52 crc kubenswrapper[4739]: I0218 14:35:52.213967 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" event={"ID":"18f01021-e95a-43e8-a660-1a2c9cb9d8c5","Type":"ContainerStarted","Data":"0a36caa5a304b255bbea0df3251e633b5ea577e67c9aeae95277ec0d7d37b606"} Feb 18 14:35:52 crc kubenswrapper[4739]: I0218 14:35:52.217790 4739 generic.go:334] "Generic (PLEG): container finished" podID="3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" 
containerID="52461b44c39faa723782a3c7c431b38381f55bd7c0e8904596c87d8e13a7cc7b" exitCode=0 Feb 18 14:35:52 crc kubenswrapper[4739]: I0218 14:35:52.217836 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk57d" event={"ID":"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1","Type":"ContainerDied","Data":"52461b44c39faa723782a3c7c431b38381f55bd7c0e8904596c87d8e13a7cc7b"} Feb 18 14:35:52 crc kubenswrapper[4739]: I0218 14:35:52.239748 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" podStartSLOduration=2.803240743 podStartE2EDuration="3.239726569s" podCreationTimestamp="2026-02-18 14:35:49 +0000 UTC" firstStartedPulling="2026-02-18 14:35:50.289944952 +0000 UTC m=+2182.785665874" lastFinishedPulling="2026-02-18 14:35:50.726430788 +0000 UTC m=+2183.222151700" observedRunningTime="2026-02-18 14:35:52.230602997 +0000 UTC m=+2184.726323929" watchObservedRunningTime="2026-02-18 14:35:52.239726569 +0000 UTC m=+2184.735447491" Feb 18 14:35:54 crc kubenswrapper[4739]: I0218 14:35:54.237690 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk57d" event={"ID":"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1","Type":"ContainerStarted","Data":"a563106f16064f936626aa2d457f2f22048c09fbfde32f7e729118524050980e"} Feb 18 14:35:54 crc kubenswrapper[4739]: I0218 14:35:54.622490 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:54 crc kubenswrapper[4739]: I0218 14:35:54.622554 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:54 crc kubenswrapper[4739]: I0218 14:35:54.673550 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:55 crc kubenswrapper[4739]: I0218 14:35:55.302695 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:56 crc kubenswrapper[4739]: I0218 14:35:56.259755 4739 generic.go:334] "Generic (PLEG): container finished" podID="3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" containerID="a563106f16064f936626aa2d457f2f22048c09fbfde32f7e729118524050980e" exitCode=0 Feb 18 14:35:56 crc kubenswrapper[4739]: I0218 14:35:56.259847 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk57d" event={"ID":"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1","Type":"ContainerDied","Data":"a563106f16064f936626aa2d457f2f22048c09fbfde32f7e729118524050980e"} Feb 18 14:35:56 crc kubenswrapper[4739]: I0218 14:35:56.410646 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:35:56 crc kubenswrapper[4739]: E0218 14:35:56.411022 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:35:57 crc kubenswrapper[4739]: I0218 14:35:57.054217 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-ct24c"] Feb 18 14:35:57 crc kubenswrapper[4739]: I0218 14:35:57.272104 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk57d" event={"ID":"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1","Type":"ContainerStarted","Data":"db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949"} Feb 18 14:35:57 crc kubenswrapper[4739]: I0218 14:35:57.272285 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ct24c" podUID="7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" containerName="registry-server" containerID="cri-o://e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6" gracePeriod=2 Feb 18 14:35:57 crc kubenswrapper[4739]: I0218 14:35:57.305713 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dk57d" podStartSLOduration=2.644625662 podStartE2EDuration="7.305692877s" podCreationTimestamp="2026-02-18 14:35:50 +0000 UTC" firstStartedPulling="2026-02-18 14:35:52.219889414 +0000 UTC m=+2184.715610336" lastFinishedPulling="2026-02-18 14:35:56.880956629 +0000 UTC m=+2189.376677551" observedRunningTime="2026-02-18 14:35:57.297751205 +0000 UTC m=+2189.793472137" watchObservedRunningTime="2026-02-18 14:35:57.305692877 +0000 UTC m=+2189.801413799" Feb 18 14:35:57 crc kubenswrapper[4739]: I0218 14:35:57.843860 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:57 crc kubenswrapper[4739]: I0218 14:35:57.911138 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-catalog-content\") pod \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\" (UID: \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\") " Feb 18 14:35:57 crc kubenswrapper[4739]: I0218 14:35:57.911254 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htdm7\" (UniqueName: \"kubernetes.io/projected/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-kube-api-access-htdm7\") pod \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\" (UID: \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\") " Feb 18 14:35:57 crc kubenswrapper[4739]: I0218 14:35:57.911517 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-utilities\") pod \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\" (UID: \"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0\") " Feb 18 14:35:57 crc kubenswrapper[4739]: I0218 14:35:57.912185 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-utilities" (OuterVolumeSpecName: "utilities") pod "7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" (UID: "7d3efb79-8fb2-4fea-adda-ac014c8ea1e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:35:57 crc kubenswrapper[4739]: I0218 14:35:57.930949 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-kube-api-access-htdm7" (OuterVolumeSpecName: "kube-api-access-htdm7") pod "7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" (UID: "7d3efb79-8fb2-4fea-adda-ac014c8ea1e0"). InnerVolumeSpecName "kube-api-access-htdm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:35:57 crc kubenswrapper[4739]: I0218 14:35:57.957204 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" (UID: "7d3efb79-8fb2-4fea-adda-ac014c8ea1e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.014155 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htdm7\" (UniqueName: \"kubernetes.io/projected/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-kube-api-access-htdm7\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.014189 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.014199 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.284478 4739 generic.go:334] "Generic (PLEG): container finished" podID="7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" containerID="e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6" exitCode=0 Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.284521 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ct24c" event={"ID":"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0","Type":"ContainerDied","Data":"e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6"} Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.284554 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ct24c" event={"ID":"7d3efb79-8fb2-4fea-adda-ac014c8ea1e0","Type":"ContainerDied","Data":"e98eb27ec0270c6671d9c8a8131aa295c8aeb989346be9138e5f12fa0696debd"} Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.284571 4739 scope.go:117] "RemoveContainer" containerID="e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.284578 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ct24c" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.320125 4739 scope.go:117] "RemoveContainer" containerID="098a8ec67aae1154e60c0f157e3eaf1f5042c688dee3829fca52f6b5c3e393f4" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.323572 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ct24c"] Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.336195 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ct24c"] Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.389139 4739 scope.go:117] "RemoveContainer" containerID="66485ad319c7fe9ceac1feb721af2a8819a801269e3a0d7964a88f55570cbed8" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.423208 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" path="/var/lib/kubelet/pods/7d3efb79-8fb2-4fea-adda-ac014c8ea1e0/volumes" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.462823 4739 scope.go:117] "RemoveContainer" containerID="e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6" Feb 18 14:35:58 crc kubenswrapper[4739]: E0218 14:35:58.463806 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6\": container with ID starting with e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6 not found: ID does not exist" containerID="e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.463860 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6"} err="failed to get container status \"e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6\": rpc error: code = NotFound desc = could not find container \"e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6\": container with ID starting with e1f0f42c48aa40a9d80248b4c7fba2fc5a35472c8d0b6a99e1f7fb20836356a6 not found: ID does not exist" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.463894 4739 scope.go:117] "RemoveContainer" containerID="098a8ec67aae1154e60c0f157e3eaf1f5042c688dee3829fca52f6b5c3e393f4" Feb 18 14:35:58 crc kubenswrapper[4739]: E0218 14:35:58.464348 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"098a8ec67aae1154e60c0f157e3eaf1f5042c688dee3829fca52f6b5c3e393f4\": container with ID starting with 098a8ec67aae1154e60c0f157e3eaf1f5042c688dee3829fca52f6b5c3e393f4 not found: ID does not exist" containerID="098a8ec67aae1154e60c0f157e3eaf1f5042c688dee3829fca52f6b5c3e393f4" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.464382 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"098a8ec67aae1154e60c0f157e3eaf1f5042c688dee3829fca52f6b5c3e393f4"} err="failed to get container status \"098a8ec67aae1154e60c0f157e3eaf1f5042c688dee3829fca52f6b5c3e393f4\": rpc error: code = NotFound desc = could not find container \"098a8ec67aae1154e60c0f157e3eaf1f5042c688dee3829fca52f6b5c3e393f4\": container with ID starting with 098a8ec67aae1154e60c0f157e3eaf1f5042c688dee3829fca52f6b5c3e393f4 not found: ID does not exist" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 
14:35:58.464563 4739 scope.go:117] "RemoveContainer" containerID="66485ad319c7fe9ceac1feb721af2a8819a801269e3a0d7964a88f55570cbed8" Feb 18 14:35:58 crc kubenswrapper[4739]: E0218 14:35:58.464944 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66485ad319c7fe9ceac1feb721af2a8819a801269e3a0d7964a88f55570cbed8\": container with ID starting with 66485ad319c7fe9ceac1feb721af2a8819a801269e3a0d7964a88f55570cbed8 not found: ID does not exist" containerID="66485ad319c7fe9ceac1feb721af2a8819a801269e3a0d7964a88f55570cbed8" Feb 18 14:35:58 crc kubenswrapper[4739]: I0218 14:35:58.464974 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66485ad319c7fe9ceac1feb721af2a8819a801269e3a0d7964a88f55570cbed8"} err="failed to get container status \"66485ad319c7fe9ceac1feb721af2a8819a801269e3a0d7964a88f55570cbed8\": rpc error: code = NotFound desc = could not find container \"66485ad319c7fe9ceac1feb721af2a8819a801269e3a0d7964a88f55570cbed8\": container with ID starting with 66485ad319c7fe9ceac1feb721af2a8819a801269e3a0d7964a88f55570cbed8 not found: ID does not exist" Feb 18 14:35:59 crc kubenswrapper[4739]: I0218 14:35:59.298462 4739 generic.go:334] "Generic (PLEG): container finished" podID="18f01021-e95a-43e8-a660-1a2c9cb9d8c5" containerID="0a36caa5a304b255bbea0df3251e633b5ea577e67c9aeae95277ec0d7d37b606" exitCode=0 Feb 18 14:35:59 crc kubenswrapper[4739]: I0218 14:35:59.298501 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" event={"ID":"18f01021-e95a-43e8-a660-1a2c9cb9d8c5","Type":"ContainerDied","Data":"0a36caa5a304b255bbea0df3251e633b5ea577e67c9aeae95277ec0d7d37b606"} Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.400041 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.400368 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.479494 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.804033 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.884561 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-ssh-key-openstack-edpm-ipam\") pod \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\" (UID: \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\") " Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.884750 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd9s5\" (UniqueName: \"kubernetes.io/projected/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-kube-api-access-zd9s5\") pod \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\" (UID: \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\") " Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.884850 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-inventory\") pod \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\" (UID: \"18f01021-e95a-43e8-a660-1a2c9cb9d8c5\") " Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.889897 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-kube-api-access-zd9s5" (OuterVolumeSpecName: "kube-api-access-zd9s5") pod "18f01021-e95a-43e8-a660-1a2c9cb9d8c5" (UID: "18f01021-e95a-43e8-a660-1a2c9cb9d8c5"). InnerVolumeSpecName "kube-api-access-zd9s5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.913022 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "18f01021-e95a-43e8-a660-1a2c9cb9d8c5" (UID: "18f01021-e95a-43e8-a660-1a2c9cb9d8c5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.916992 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-inventory" (OuterVolumeSpecName: "inventory") pod "18f01021-e95a-43e8-a660-1a2c9cb9d8c5" (UID: "18f01021-e95a-43e8-a660-1a2c9cb9d8c5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.988892 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd9s5\" (UniqueName: \"kubernetes.io/projected/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-kube-api-access-zd9s5\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.988934 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:00 crc kubenswrapper[4739]: I0218 14:36:00.988947 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/18f01021-e95a-43e8-a660-1a2c9cb9d8c5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.338310 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" event={"ID":"18f01021-e95a-43e8-a660-1a2c9cb9d8c5","Type":"ContainerDied","Data":"2abea5bb56e874060956efd6c58905978721bda04f9962db60beb0ca3290a362"} Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.338643 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2abea5bb56e874060956efd6c58905978721bda04f9962db60beb0ca3290a362" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.338489 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jct96" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.396180 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr"] Feb 18 14:36:01 crc kubenswrapper[4739]: E0218 14:36:01.396684 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18f01021-e95a-43e8-a660-1a2c9cb9d8c5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.396702 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="18f01021-e95a-43e8-a660-1a2c9cb9d8c5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 14:36:01 crc kubenswrapper[4739]: E0218 14:36:01.396725 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" containerName="registry-server" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.396732 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" containerName="registry-server" Feb 18 14:36:01 crc kubenswrapper[4739]: E0218 14:36:01.396750 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" containerName="extract-utilities" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.396757 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" containerName="extract-utilities" Feb 18 14:36:01 crc kubenswrapper[4739]: E0218 14:36:01.396776 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" containerName="extract-content" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.396781 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" containerName="extract-content" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.396989 4739 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7d3efb79-8fb2-4fea-adda-ac014c8ea1e0" containerName="registry-server" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.397013 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="18f01021-e95a-43e8-a660-1a2c9cb9d8c5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.397767 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.401662 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.401845 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.401874 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.411081 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr"] Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.439772 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.499968 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7a96416-0a9e-44f5-9200-755a99d4c38e-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr\" (UID: \"c7a96416-0a9e-44f5-9200-755a99d4c38e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.500061 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7a96416-0a9e-44f5-9200-755a99d4c38e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr\" (UID: \"c7a96416-0a9e-44f5-9200-755a99d4c38e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.500516 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq7fp\" (UniqueName: \"kubernetes.io/projected/c7a96416-0a9e-44f5-9200-755a99d4c38e-kube-api-access-tq7fp\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr\" (UID: \"c7a96416-0a9e-44f5-9200-755a99d4c38e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.603973 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7a96416-0a9e-44f5-9200-755a99d4c38e-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr\" (UID: \"c7a96416-0a9e-44f5-9200-755a99d4c38e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.604059 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/c7a96416-0a9e-44f5-9200-755a99d4c38e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr\" (UID: \"c7a96416-0a9e-44f5-9200-755a99d4c38e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.604150 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq7fp\" (UniqueName: \"kubernetes.io/projected/c7a96416-0a9e-44f5-9200-755a99d4c38e-kube-api-access-tq7fp\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr\" (UID: \"c7a96416-0a9e-44f5-9200-755a99d4c38e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.609207 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7a96416-0a9e-44f5-9200-755a99d4c38e-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr\" (UID: \"c7a96416-0a9e-44f5-9200-755a99d4c38e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.612524 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7a96416-0a9e-44f5-9200-755a99d4c38e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr\" (UID: \"c7a96416-0a9e-44f5-9200-755a99d4c38e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.619356 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq7fp\" (UniqueName: \"kubernetes.io/projected/c7a96416-0a9e-44f5-9200-755a99d4c38e-kube-api-access-tq7fp\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr\" (UID: \"c7a96416-0a9e-44f5-9200-755a99d4c38e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" Feb 18 14:36:01 crc kubenswrapper[4739]: I0218 14:36:01.757840 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" Feb 18 14:36:02 crc kubenswrapper[4739]: I0218 14:36:02.308232 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr"] Feb 18 14:36:02 crc kubenswrapper[4739]: I0218 14:36:02.348523 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" event={"ID":"c7a96416-0a9e-44f5-9200-755a99d4c38e","Type":"ContainerStarted","Data":"0a66691ca87594d26416873682bfd3c94b8591005eb049dcda8c1fe1ff884c24"} Feb 18 14:36:03 crc kubenswrapper[4739]: I0218 14:36:03.369436 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" event={"ID":"c7a96416-0a9e-44f5-9200-755a99d4c38e","Type":"ContainerStarted","Data":"09fa6ef9c8bdb5d73b629df7fbb74d95a842311149a8134f3bf5046e44ed6aed"} Feb 18 14:36:03 crc kubenswrapper[4739]: I0218 14:36:03.399193 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" podStartSLOduration=1.917995253 podStartE2EDuration="2.399175446s" podCreationTimestamp="2026-02-18 14:36:01 +0000 UTC" firstStartedPulling="2026-02-18 14:36:02.314548292 +0000 UTC m=+2194.810269214" lastFinishedPulling="2026-02-18 14:36:02.795728485 +0000 UTC m=+2195.291449407" observedRunningTime="2026-02-18 14:36:03.389724396 +0000 UTC m=+2195.885445348" watchObservedRunningTime="2026-02-18 14:36:03.399175446 +0000 UTC m=+2195.894896368" Feb 18 14:36:07 crc kubenswrapper[4739]: I0218 14:36:07.053880 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-zq8vc"] Feb 18 14:36:07 crc kubenswrapper[4739]: I0218 14:36:07.067465 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-zq8vc"] Feb 18 14:36:08 crc kubenswrapper[4739]: I0218 14:36:08.424967 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e0a952f-ef12-46c6-8ca8-10f016b441be" path="/var/lib/kubelet/pods/6e0a952f-ef12-46c6-8ca8-10f016b441be/volumes" Feb 18 14:36:09 crc kubenswrapper[4739]: I0218 14:36:09.411792 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:36:09 crc kubenswrapper[4739]: E0218 14:36:09.412961 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:36:10 crc kubenswrapper[4739]: I0218 14:36:10.461241 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:36:10 crc kubenswrapper[4739]: I0218 14:36:10.520696 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dk57d"] Feb 18 14:36:11 crc kubenswrapper[4739]: I0218 14:36:11.506621 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dk57d" podUID="3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" containerName="registry-server" 
containerID="cri-o://db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949" gracePeriod=2 Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.039265 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.094430 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxs7p\" (UniqueName: \"kubernetes.io/projected/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-kube-api-access-xxs7p\") pod \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\" (UID: \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\") " Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.094576 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-utilities\") pod \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\" (UID: \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\") " Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.094611 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-catalog-content\") pod \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\" (UID: \"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1\") " Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.096140 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-utilities" (OuterVolumeSpecName: "utilities") pod "3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" (UID: "3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.096952 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.101677 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-kube-api-access-xxs7p" (OuterVolumeSpecName: "kube-api-access-xxs7p") pod "3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" (UID: "3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1"). InnerVolumeSpecName "kube-api-access-xxs7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.152479 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" (UID: "3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.199510 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxs7p\" (UniqueName: \"kubernetes.io/projected/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-kube-api-access-xxs7p\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.199549 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.519411 4739 generic.go:334] "Generic (PLEG): container finished" podID="c7a96416-0a9e-44f5-9200-755a99d4c38e" containerID="09fa6ef9c8bdb5d73b629df7fbb74d95a842311149a8134f3bf5046e44ed6aed" exitCode=0 Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.519496 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" event={"ID":"c7a96416-0a9e-44f5-9200-755a99d4c38e","Type":"ContainerDied","Data":"09fa6ef9c8bdb5d73b629df7fbb74d95a842311149a8134f3bf5046e44ed6aed"} Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.522951 4739 generic.go:334] "Generic (PLEG): container finished" podID="3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" containerID="db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949" exitCode=0 Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.523008 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk57d" event={"ID":"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1","Type":"ContainerDied","Data":"db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949"} Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.523033 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dk57d" event={"ID":"3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1","Type":"ContainerDied","Data":"cdaa3f953e05885fe975cbdb944614d11775a19b3997b116b17e5cc3b88476ef"} Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.523063 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dk57d" Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.523072 4739 scope.go:117] "RemoveContainer" containerID="db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949" Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.559665 4739 scope.go:117] "RemoveContainer" containerID="a563106f16064f936626aa2d457f2f22048c09fbfde32f7e729118524050980e" Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.574016 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dk57d"] Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.581587 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dk57d"] Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.589378 4739 scope.go:117] "RemoveContainer" containerID="52461b44c39faa723782a3c7c431b38381f55bd7c0e8904596c87d8e13a7cc7b" Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.652239 4739 scope.go:117] "RemoveContainer" containerID="db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949" Feb 18 14:36:12 crc kubenswrapper[4739]: E0218 14:36:12.652725 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949\": container with ID starting with db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949 not found: ID does not exist" containerID="db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949" Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.652756 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949"} err="failed to get container status \"db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949\": rpc error: code = NotFound desc = could not find container \"db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949\": container with ID starting with db18d2f70041ef022bbf3f2065145504ae27b0c77e2572db0c84c702ba76b949 not found: ID does not exist" Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.652781 4739 scope.go:117] "RemoveContainer" containerID="a563106f16064f936626aa2d457f2f22048c09fbfde32f7e729118524050980e" Feb 18 14:36:12 crc kubenswrapper[4739]: E0218 14:36:12.653148 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a563106f16064f936626aa2d457f2f22048c09fbfde32f7e729118524050980e\": container with ID starting with a563106f16064f936626aa2d457f2f22048c09fbfde32f7e729118524050980e not found: ID does not exist" containerID="a563106f16064f936626aa2d457f2f22048c09fbfde32f7e729118524050980e" Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.653188 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a563106f16064f936626aa2d457f2f22048c09fbfde32f7e729118524050980e"} err="failed to get container status \"a563106f16064f936626aa2d457f2f22048c09fbfde32f7e729118524050980e\": rpc error: code = NotFound desc = could not find container \"a563106f16064f936626aa2d457f2f22048c09fbfde32f7e729118524050980e\": container with ID starting with a563106f16064f936626aa2d457f2f22048c09fbfde32f7e729118524050980e not found: ID does not exist" Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.653215 4739 scope.go:117] "RemoveContainer" 
containerID="52461b44c39faa723782a3c7c431b38381f55bd7c0e8904596c87d8e13a7cc7b" Feb 18 14:36:12 crc kubenswrapper[4739]: E0218 14:36:12.653768 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52461b44c39faa723782a3c7c431b38381f55bd7c0e8904596c87d8e13a7cc7b\": container with ID starting with 52461b44c39faa723782a3c7c431b38381f55bd7c0e8904596c87d8e13a7cc7b not found: ID does not exist" containerID="52461b44c39faa723782a3c7c431b38381f55bd7c0e8904596c87d8e13a7cc7b" Feb 18 14:36:12 crc kubenswrapper[4739]: I0218 14:36:12.653815 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52461b44c39faa723782a3c7c431b38381f55bd7c0e8904596c87d8e13a7cc7b"} err="failed to get container status \"52461b44c39faa723782a3c7c431b38381f55bd7c0e8904596c87d8e13a7cc7b\": rpc error: code = NotFound desc = could not find container \"52461b44c39faa723782a3c7c431b38381f55bd7c0e8904596c87d8e13a7cc7b\": container with ID starting with 52461b44c39faa723782a3c7c431b38381f55bd7c0e8904596c87d8e13a7cc7b not found: ID does not exist" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.064813 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.151046 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7a96416-0a9e-44f5-9200-755a99d4c38e-inventory\") pod \"c7a96416-0a9e-44f5-9200-755a99d4c38e\" (UID: \"c7a96416-0a9e-44f5-9200-755a99d4c38e\") " Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.151511 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq7fp\" (UniqueName: \"kubernetes.io/projected/c7a96416-0a9e-44f5-9200-755a99d4c38e-kube-api-access-tq7fp\") pod \"c7a96416-0a9e-44f5-9200-755a99d4c38e\" (UID: \"c7a96416-0a9e-44f5-9200-755a99d4c38e\") " Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.151682 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7a96416-0a9e-44f5-9200-755a99d4c38e-ssh-key-openstack-edpm-ipam\") pod \"c7a96416-0a9e-44f5-9200-755a99d4c38e\" (UID: \"c7a96416-0a9e-44f5-9200-755a99d4c38e\") " Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.162847 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7a96416-0a9e-44f5-9200-755a99d4c38e-kube-api-access-tq7fp" (OuterVolumeSpecName: "kube-api-access-tq7fp") pod "c7a96416-0a9e-44f5-9200-755a99d4c38e" (UID: "c7a96416-0a9e-44f5-9200-755a99d4c38e"). InnerVolumeSpecName "kube-api-access-tq7fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.189249 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7a96416-0a9e-44f5-9200-755a99d4c38e-inventory" (OuterVolumeSpecName: "inventory") pod "c7a96416-0a9e-44f5-9200-755a99d4c38e" (UID: "c7a96416-0a9e-44f5-9200-755a99d4c38e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.193599 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7a96416-0a9e-44f5-9200-755a99d4c38e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c7a96416-0a9e-44f5-9200-755a99d4c38e" (UID: "c7a96416-0a9e-44f5-9200-755a99d4c38e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.255028 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7a96416-0a9e-44f5-9200-755a99d4c38e-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.255360 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tq7fp\" (UniqueName: \"kubernetes.io/projected/c7a96416-0a9e-44f5-9200-755a99d4c38e-kube-api-access-tq7fp\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.255561 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7a96416-0a9e-44f5-9200-755a99d4c38e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.431246 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" path="/var/lib/kubelet/pods/3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1/volumes" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.562518 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" event={"ID":"c7a96416-0a9e-44f5-9200-755a99d4c38e","Type":"ContainerDied","Data":"0a66691ca87594d26416873682bfd3c94b8591005eb049dcda8c1fe1ff884c24"} Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.562828 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a66691ca87594d26416873682bfd3c94b8591005eb049dcda8c1fe1ff884c24" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.562584 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.651216 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"] Feb 18 14:36:14 crc kubenswrapper[4739]: E0218 14:36:14.651868 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7a96416-0a9e-44f5-9200-755a99d4c38e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.651894 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a96416-0a9e-44f5-9200-755a99d4c38e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 14:36:14 crc kubenswrapper[4739]: E0218 14:36:14.651916 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" containerName="extract-utilities" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.651925 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" containerName="extract-utilities" Feb 18 14:36:14 crc kubenswrapper[4739]: E0218 14:36:14.651945 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" containerName="registry-server" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.651952 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" containerName="registry-server" Feb 18 14:36:14 crc kubenswrapper[4739]: E0218 14:36:14.651967 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" containerName="extract-content" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.651975 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" containerName="extract-content" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.652258 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7a96416-0a9e-44f5-9200-755a99d4c38e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.652282 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3294ebfc-1c27-44e3-a94e-ef98dfd9f0f1" containerName="registry-server" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.656376 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.659055 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.659270 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.659283 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.659389 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.659537 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.659635 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.660714 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.660724 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.661774 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.662514 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"] Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.783897 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.783978 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnx4h\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-kube-api-access-wnx4h\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.784037 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.784079 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.784112 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.784149 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.784206 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.784318 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.784528 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.784659 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.784795 4739 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.784962 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.785046 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.785117 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.785305 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.785395 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.888419 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.888506 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.888548 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.888592 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.888629 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.888763 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.888824 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnx4h\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-kube-api-access-wnx4h\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.888858 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.888886 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.888911 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.888942 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.888972 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.889040 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.889077 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.889113 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.889158 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc 
kubenswrapper[4739]: I0218 14:36:14.894885 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.895128 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.895136 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.895275 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.895835 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.896906 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.897138 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.897397 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-bootstrap-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.897702 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.898368 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.898373 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.898747 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.900614 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.901594 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.901650 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc 
kubenswrapper[4739]: I0218 14:36:14.910412 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnx4h\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-kube-api-access-wnx4h\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-klrh7\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:14 crc kubenswrapper[4739]: I0218 14:36:14.991481 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:15 crc kubenswrapper[4739]: I0218 14:36:15.646081 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7"] Feb 18 14:36:16 crc kubenswrapper[4739]: I0218 14:36:16.590671 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" event={"ID":"fc5c5a16-015a-48fe-a2c1-1954543e14bd","Type":"ContainerStarted","Data":"df78306ba01b1d911236fd9e681dba2353f595691554d4b3fd42fed37cdd9542"} Feb 18 14:36:16 crc kubenswrapper[4739]: I0218 14:36:16.591718 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" event={"ID":"fc5c5a16-015a-48fe-a2c1-1954543e14bd","Type":"ContainerStarted","Data":"b730c2f9b6fecc1733f1c12778c0b205ba9f2320979358e1eb9d5c08b8b95993"} Feb 18 14:36:16 crc kubenswrapper[4739]: I0218 14:36:16.617236 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" podStartSLOduration=2.143170957 podStartE2EDuration="2.617216678s" podCreationTimestamp="2026-02-18 14:36:14 +0000 UTC" firstStartedPulling="2026-02-18 14:36:15.661182094 +0000 UTC m=+2208.156903016" lastFinishedPulling="2026-02-18 14:36:16.135227815 +0000 UTC m=+2208.630948737" observedRunningTime="2026-02-18 14:36:16.612437147 +0000 UTC m=+2209.108158069" watchObservedRunningTime="2026-02-18 14:36:16.617216678 +0000 UTC m=+2209.112937600" Feb 18 14:36:21 crc kubenswrapper[4739]: I0218 14:36:21.412199 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:36:21 crc kubenswrapper[4739]: E0218 14:36:21.412991 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:36:33 crc kubenswrapper[4739]: I0218 14:36:33.410464 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:36:33 crc kubenswrapper[4739]: E0218 14:36:33.411333 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:36:36 crc kubenswrapper[4739]: 
I0218 14:36:36.857080 4739 scope.go:117] "RemoveContainer" containerID="03775c57719ac4b92c1847bc19cfdeea48db66d3dda5aee4aca36cb4a626f862" Feb 18 14:36:48 crc kubenswrapper[4739]: I0218 14:36:48.418813 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:36:48 crc kubenswrapper[4739]: E0218 14:36:48.419692 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:36:50 crc kubenswrapper[4739]: I0218 14:36:50.054773 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-k8bxr"] Feb 18 14:36:50 crc kubenswrapper[4739]: I0218 14:36:50.068952 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-k8bxr"] Feb 18 14:36:50 crc kubenswrapper[4739]: I0218 14:36:50.423831 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18e3b1f2-e16d-4800-90db-c4cc03f891c3" path="/var/lib/kubelet/pods/18e3b1f2-e16d-4800-90db-c4cc03f891c3/volumes" Feb 18 14:36:56 crc kubenswrapper[4739]: I0218 14:36:56.007589 4739 generic.go:334] "Generic (PLEG): container finished" podID="fc5c5a16-015a-48fe-a2c1-1954543e14bd" containerID="df78306ba01b1d911236fd9e681dba2353f595691554d4b3fd42fed37cdd9542" exitCode=0 Feb 18 14:36:56 crc kubenswrapper[4739]: I0218 14:36:56.007716 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" event={"ID":"fc5c5a16-015a-48fe-a2c1-1954543e14bd","Type":"ContainerDied","Data":"df78306ba01b1d911236fd9e681dba2353f595691554d4b3fd42fed37cdd9542"} Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.499742 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606259 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-inventory\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606316 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-telemetry-combined-ca-bundle\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606341 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-bootstrap-combined-ca-bundle\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606384 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnx4h\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-kube-api-access-wnx4h\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606404 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-telemetry-power-monitoring-combined-ca-bundle\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606473 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606500 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606608 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-nova-combined-ca-bundle\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606666 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod 
\"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606766 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-ovn-default-certs-0\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606806 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606828 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-ovn-combined-ca-bundle\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606853 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-libvirt-combined-ca-bundle\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606878 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-repo-setup-combined-ca-bundle\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606907 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-ssh-key-openstack-edpm-ipam\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.606947 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-neutron-metadata-combined-ca-bundle\") pod \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\" (UID: \"fc5c5a16-015a-48fe-a2c1-1954543e14bd\") " Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.613632 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.615396 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.615549 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.615911 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.616285 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.617129 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.617773 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-kube-api-access-wnx4h" (OuterVolumeSpecName: "kube-api-access-wnx4h") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "kube-api-access-wnx4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.618181 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.618556 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.618724 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.621669 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.622396 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.625958 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.628794 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.660098 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.668276 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-inventory" (OuterVolumeSpecName: "inventory") pod "fc5c5a16-015a-48fe-a2c1-1954543e14bd" (UID: "fc5c5a16-015a-48fe-a2c1-1954543e14bd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.710965 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711025 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711045 4739 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711058 4739 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711070 4739 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711080 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711114 4739 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711126 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711136 4739 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711148 4739 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711159 4739 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-wnx4h\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-kube-api-access-wnx4h\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711197 4739 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711209 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711223 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711237 4739 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5c5a16-015a-48fe-a2c1-1954543e14bd-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:57 crc kubenswrapper[4739]: I0218 14:36:57.711274 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fc5c5a16-015a-48fe-a2c1-1954543e14bd-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.032647 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" event={"ID":"fc5c5a16-015a-48fe-a2c1-1954543e14bd","Type":"ContainerDied","Data":"b730c2f9b6fecc1733f1c12778c0b205ba9f2320979358e1eb9d5c08b8b95993"} Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.032698 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b730c2f9b6fecc1733f1c12778c0b205ba9f2320979358e1eb9d5c08b8b95993" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.032708 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-klrh7" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.143013 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb"] Feb 18 14:36:58 crc kubenswrapper[4739]: E0218 14:36:58.143684 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc5c5a16-015a-48fe-a2c1-1954543e14bd" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.143711 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc5c5a16-015a-48fe-a2c1-1954543e14bd" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.143970 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc5c5a16-015a-48fe-a2c1-1954543e14bd" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.145111 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.148558 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.148646 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.149681 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.149698 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.149725 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.157596 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb"] Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.222428 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.222744 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.222840 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c4382bff-5480-4a55-ad49-e6293729f738-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.222992 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btmmx\" (UniqueName: \"kubernetes.io/projected/c4382bff-5480-4a55-ad49-e6293729f738-kube-api-access-btmmx\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.223155 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.325357 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.325741 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.325843 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c4382bff-5480-4a55-ad49-e6293729f738-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.325967 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btmmx\" (UniqueName: \"kubernetes.io/projected/c4382bff-5480-4a55-ad49-e6293729f738-kube-api-access-btmmx\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.326073 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.327078 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c4382bff-5480-4a55-ad49-e6293729f738-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.330948 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.332193 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.344725 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.355474 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btmmx\" (UniqueName: \"kubernetes.io/projected/c4382bff-5480-4a55-ad49-e6293729f738-kube-api-access-btmmx\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-g8rqb\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:58 crc kubenswrapper[4739]: I0218 14:36:58.479808 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:36:59 crc kubenswrapper[4739]: I0218 14:36:59.021555 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb"] Feb 18 14:36:59 crc kubenswrapper[4739]: I0218 14:36:59.052465 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" event={"ID":"c4382bff-5480-4a55-ad49-e6293729f738","Type":"ContainerStarted","Data":"15162173142a3209858a61c984ad415f3528b65545ad8e7da191d56c81b327ca"} Feb 18 14:37:00 crc kubenswrapper[4739]: I0218 14:37:00.064661 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" event={"ID":"c4382bff-5480-4a55-ad49-e6293729f738","Type":"ContainerStarted","Data":"40b98275f70a2aa1b100a0382e07f6946f3af143f924151e6e7d6b280736d88c"} Feb 18 14:37:00 crc kubenswrapper[4739]: I0218 14:37:00.085539 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" podStartSLOduration=1.670493838 podStartE2EDuration="2.085519448s" podCreationTimestamp="2026-02-18 14:36:58 +0000 UTC" firstStartedPulling="2026-02-18 14:36:59.025266525 +0000 UTC m=+2251.520987447" lastFinishedPulling="2026-02-18 14:36:59.440292135 +0000 UTC m=+2251.936013057" observedRunningTime="2026-02-18 14:37:00.083566118 +0000 UTC m=+2252.579287060" watchObservedRunningTime="2026-02-18 14:37:00.085519448 +0000 UTC m=+2252.581240390" Feb 18 14:37:02 crc kubenswrapper[4739]: I0218 14:37:02.411371 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:37:02 crc kubenswrapper[4739]: E0218 14:37:02.412246 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:37:16 crc kubenswrapper[4739]: I0218 14:37:16.411591 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:37:16 crc kubenswrapper[4739]: E0218 14:37:16.413265 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:37:27 crc kubenswrapper[4739]: I0218 14:37:27.410218 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:37:27 crc kubenswrapper[4739]: E0218 14:37:27.411141 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" 
podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:37:36 crc kubenswrapper[4739]: I0218 14:37:36.984575 4739 scope.go:117] "RemoveContainer" containerID="ea37bd2fe6c3cde4519476c0d93705aa44f3d3921ef14e7b974cb0ef1c293843" Feb 18 14:37:39 crc kubenswrapper[4739]: I0218 14:37:39.410653 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:37:39 crc kubenswrapper[4739]: E0218 14:37:39.411570 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:37:51 crc kubenswrapper[4739]: I0218 14:37:51.411001 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:37:51 crc kubenswrapper[4739]: E0218 14:37:51.412004 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:37:54 crc kubenswrapper[4739]: I0218 14:37:54.636594 4739 generic.go:334] "Generic (PLEG): container finished" podID="c4382bff-5480-4a55-ad49-e6293729f738" containerID="40b98275f70a2aa1b100a0382e07f6946f3af143f924151e6e7d6b280736d88c" exitCode=0 Feb 18 14:37:54 crc kubenswrapper[4739]: I0218 14:37:54.636654 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" event={"ID":"c4382bff-5480-4a55-ad49-e6293729f738","Type":"ContainerDied","Data":"40b98275f70a2aa1b100a0382e07f6946f3af143f924151e6e7d6b280736d88c"} Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.109254 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.250234 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c4382bff-5480-4a55-ad49-e6293729f738-ovncontroller-config-0\") pod \"c4382bff-5480-4a55-ad49-e6293729f738\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.250349 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-inventory\") pod \"c4382bff-5480-4a55-ad49-e6293729f738\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.250469 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btmmx\" (UniqueName: \"kubernetes.io/projected/c4382bff-5480-4a55-ad49-e6293729f738-kube-api-access-btmmx\") pod \"c4382bff-5480-4a55-ad49-e6293729f738\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.250504 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-ovn-combined-ca-bundle\") pod \"c4382bff-5480-4a55-ad49-e6293729f738\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.250573 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-ssh-key-openstack-edpm-ipam\") pod \"c4382bff-5480-4a55-ad49-e6293729f738\" (UID: \"c4382bff-5480-4a55-ad49-e6293729f738\") " Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.256745 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4382bff-5480-4a55-ad49-e6293729f738-kube-api-access-btmmx" (OuterVolumeSpecName: "kube-api-access-btmmx") pod "c4382bff-5480-4a55-ad49-e6293729f738" (UID: "c4382bff-5480-4a55-ad49-e6293729f738"). InnerVolumeSpecName "kube-api-access-btmmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.256935 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "c4382bff-5480-4a55-ad49-e6293729f738" (UID: "c4382bff-5480-4a55-ad49-e6293729f738"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.283350 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c4382bff-5480-4a55-ad49-e6293729f738" (UID: "c4382bff-5480-4a55-ad49-e6293729f738"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.283707 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4382bff-5480-4a55-ad49-e6293729f738-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "c4382bff-5480-4a55-ad49-e6293729f738" (UID: "c4382bff-5480-4a55-ad49-e6293729f738"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.285306 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-inventory" (OuterVolumeSpecName: "inventory") pod "c4382bff-5480-4a55-ad49-e6293729f738" (UID: "c4382bff-5480-4a55-ad49-e6293729f738"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.353612 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btmmx\" (UniqueName: \"kubernetes.io/projected/c4382bff-5480-4a55-ad49-e6293729f738-kube-api-access-btmmx\") on node \"crc\" DevicePath \"\"" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.353641 4739 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.353650 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.353659 4739 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c4382bff-5480-4a55-ad49-e6293729f738-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.353669 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4382bff-5480-4a55-ad49-e6293729f738-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.666017 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" event={"ID":"c4382bff-5480-4a55-ad49-e6293729f738","Type":"ContainerDied","Data":"15162173142a3209858a61c984ad415f3528b65545ad8e7da191d56c81b327ca"} Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.666060 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15162173142a3209858a61c984ad415f3528b65545ad8e7da191d56c81b327ca" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.666112 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-g8rqb" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.758626 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j"] Feb 18 14:37:56 crc kubenswrapper[4739]: E0218 14:37:56.759231 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4382bff-5480-4a55-ad49-e6293729f738" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.759256 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4382bff-5480-4a55-ad49-e6293729f738" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.759568 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4382bff-5480-4a55-ad49-e6293729f738" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.760624 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.776182 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j"] Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.788037 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.788256 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.788313 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.788338 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.788478 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.788561 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.867594 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.867639 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.867768 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.867822 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.867884 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xddcl\" (UniqueName: \"kubernetes.io/projected/015603d5-7d09-4388-a5d1-93c25d1b6344-kube-api-access-xddcl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.867918 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.971192 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.971721 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.971849 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xddcl\" (UniqueName: \"kubernetes.io/projected/015603d5-7d09-4388-a5d1-93c25d1b6344-kube-api-access-xddcl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.971957 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.972080 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.972152 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.978056 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.979549 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.980197 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.982794 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.984889 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 
14:37:56 crc kubenswrapper[4739]: I0218 14:37:56.990751 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xddcl\" (UniqueName: \"kubernetes.io/projected/015603d5-7d09-4388-a5d1-93c25d1b6344-kube-api-access-xddcl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:57 crc kubenswrapper[4739]: I0218 14:37:57.124141 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:37:57 crc kubenswrapper[4739]: I0218 14:37:57.708507 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j"] Feb 18 14:37:58 crc kubenswrapper[4739]: I0218 14:37:58.687729 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" event={"ID":"015603d5-7d09-4388-a5d1-93c25d1b6344","Type":"ContainerStarted","Data":"38ac7b0df886ec0b85771bcdb74212edfcfe5ad9d5faae601f530372298c1069"} Feb 18 14:37:59 crc kubenswrapper[4739]: I0218 14:37:59.700062 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" event={"ID":"015603d5-7d09-4388-a5d1-93c25d1b6344","Type":"ContainerStarted","Data":"d59d6e2338496bc8e22311dc70f07b8202dfa292e5c94da4f37791a2d16e02ac"} Feb 18 14:37:59 crc kubenswrapper[4739]: I0218 14:37:59.728469 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" podStartSLOduration=2.216023198 podStartE2EDuration="3.728432308s" podCreationTimestamp="2026-02-18 14:37:56 +0000 UTC" firstStartedPulling="2026-02-18 14:37:57.723654978 +0000 UTC m=+2310.219375900" lastFinishedPulling="2026-02-18 14:37:59.236064088 +0000 UTC m=+2311.731785010" observedRunningTime="2026-02-18 14:37:59.717177142 +0000 UTC m=+2312.212898064" watchObservedRunningTime="2026-02-18 14:37:59.728432308 +0000 UTC m=+2312.224153230" Feb 18 14:38:03 crc kubenswrapper[4739]: I0218 14:38:03.411622 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:38:03 crc kubenswrapper[4739]: E0218 14:38:03.412675 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:38:16 crc kubenswrapper[4739]: I0218 14:38:16.417998 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:38:16 crc kubenswrapper[4739]: E0218 14:38:16.418704 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 
14:38:27 crc kubenswrapper[4739]: I0218 14:38:27.410196 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:38:27 crc kubenswrapper[4739]: E0218 14:38:27.410983 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:38:41 crc kubenswrapper[4739]: I0218 14:38:41.411488 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:38:41 crc kubenswrapper[4739]: E0218 14:38:41.412535 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:38:43 crc kubenswrapper[4739]: I0218 14:38:43.189244 4739 generic.go:334] "Generic (PLEG): container finished" podID="015603d5-7d09-4388-a5d1-93c25d1b6344" containerID="d59d6e2338496bc8e22311dc70f07b8202dfa292e5c94da4f37791a2d16e02ac" exitCode=0 Feb 18 14:38:43 crc kubenswrapper[4739]: I0218 14:38:43.189322 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" event={"ID":"015603d5-7d09-4388-a5d1-93c25d1b6344","Type":"ContainerDied","Data":"d59d6e2338496bc8e22311dc70f07b8202dfa292e5c94da4f37791a2d16e02ac"} Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.750645 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.829152 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-nova-metadata-neutron-config-0\") pod \"015603d5-7d09-4388-a5d1-93c25d1b6344\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.829482 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-neutron-metadata-combined-ca-bundle\") pod \"015603d5-7d09-4388-a5d1-93c25d1b6344\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.829542 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-ssh-key-openstack-edpm-ipam\") pod \"015603d5-7d09-4388-a5d1-93c25d1b6344\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.829569 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xddcl\" (UniqueName: \"kubernetes.io/projected/015603d5-7d09-4388-a5d1-93c25d1b6344-kube-api-access-xddcl\") pod \"015603d5-7d09-4388-a5d1-93c25d1b6344\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.829636 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-neutron-ovn-metadata-agent-neutron-config-0\") pod \"015603d5-7d09-4388-a5d1-93c25d1b6344\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.829716 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-inventory\") pod \"015603d5-7d09-4388-a5d1-93c25d1b6344\" (UID: \"015603d5-7d09-4388-a5d1-93c25d1b6344\") " Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.835635 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/015603d5-7d09-4388-a5d1-93c25d1b6344-kube-api-access-xddcl" (OuterVolumeSpecName: "kube-api-access-xddcl") pod "015603d5-7d09-4388-a5d1-93c25d1b6344" (UID: "015603d5-7d09-4388-a5d1-93c25d1b6344"). InnerVolumeSpecName "kube-api-access-xddcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.844479 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "015603d5-7d09-4388-a5d1-93c25d1b6344" (UID: "015603d5-7d09-4388-a5d1-93c25d1b6344"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.873018 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "015603d5-7d09-4388-a5d1-93c25d1b6344" (UID: "015603d5-7d09-4388-a5d1-93c25d1b6344"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.873348 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "015603d5-7d09-4388-a5d1-93c25d1b6344" (UID: "015603d5-7d09-4388-a5d1-93c25d1b6344"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.876137 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "015603d5-7d09-4388-a5d1-93c25d1b6344" (UID: "015603d5-7d09-4388-a5d1-93c25d1b6344"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.893044 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-inventory" (OuterVolumeSpecName: "inventory") pod "015603d5-7d09-4388-a5d1-93c25d1b6344" (UID: "015603d5-7d09-4388-a5d1-93c25d1b6344"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.933752 4739 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.933793 4739 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.933807 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.933817 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xddcl\" (UniqueName: \"kubernetes.io/projected/015603d5-7d09-4388-a5d1-93c25d1b6344-kube-api-access-xddcl\") on node \"crc\" DevicePath \"\"" Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.933826 4739 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:38:44 crc kubenswrapper[4739]: I0218 14:38:44.933838 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/015603d5-7d09-4388-a5d1-93c25d1b6344-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.212325 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" event={"ID":"015603d5-7d09-4388-a5d1-93c25d1b6344","Type":"ContainerDied","Data":"38ac7b0df886ec0b85771bcdb74212edfcfe5ad9d5faae601f530372298c1069"} Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.212374 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38ac7b0df886ec0b85771bcdb74212edfcfe5ad9d5faae601f530372298c1069" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.212433 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.320468 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n"] Feb 18 14:38:45 crc kubenswrapper[4739]: E0218 14:38:45.321041 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="015603d5-7d09-4388-a5d1-93c25d1b6344" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.321064 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="015603d5-7d09-4388-a5d1-93c25d1b6344" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.321475 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="015603d5-7d09-4388-a5d1-93c25d1b6344" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.322504 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.325899 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.326131 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.326752 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.326853 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.326998 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.341767 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.341847 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.341890 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.341993 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db6tp\" (UniqueName: \"kubernetes.io/projected/bd7dea6a-d047-4a6c-809f-395a7cf418e8-kube-api-access-db6tp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.342059 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.362824 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n"] Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.444685 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.444759 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.445325 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db6tp\" (UniqueName: \"kubernetes.io/projected/bd7dea6a-d047-4a6c-809f-395a7cf418e8-kube-api-access-db6tp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.445495 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.445568 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.450031 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: 
\"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.450250 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.451243 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.451645 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.468296 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db6tp\" (UniqueName: \"kubernetes.io/projected/bd7dea6a-d047-4a6c-809f-395a7cf418e8-kube-api-access-db6tp\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-znm2n\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" Feb 18 14:38:45 crc kubenswrapper[4739]: I0218 14:38:45.658549 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" Feb 18 14:38:46 crc kubenswrapper[4739]: I0218 14:38:46.209241 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n"] Feb 18 14:38:46 crc kubenswrapper[4739]: I0218 14:38:46.225922 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" event={"ID":"bd7dea6a-d047-4a6c-809f-395a7cf418e8","Type":"ContainerStarted","Data":"7b66148ae4cb6a51928b96889edb12cde3405f18efcd057d483e7ddb5cc7b7a1"} Feb 18 14:38:47 crc kubenswrapper[4739]: I0218 14:38:47.238008 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" event={"ID":"bd7dea6a-d047-4a6c-809f-395a7cf418e8","Type":"ContainerStarted","Data":"0d816f28e3c7a56f082308e8cbb34038d3dc00b07cc36fa6c338ae226d5a44e8"} Feb 18 14:38:47 crc kubenswrapper[4739]: I0218 14:38:47.268888 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" podStartSLOduration=1.624339532 podStartE2EDuration="2.268859481s" podCreationTimestamp="2026-02-18 14:38:45 +0000 UTC" firstStartedPulling="2026-02-18 14:38:46.217314712 +0000 UTC m=+2358.713035644" lastFinishedPulling="2026-02-18 14:38:46.861834671 +0000 UTC m=+2359.357555593" observedRunningTime="2026-02-18 14:38:47.260693663 +0000 UTC m=+2359.756414605" watchObservedRunningTime="2026-02-18 14:38:47.268859481 +0000 UTC m=+2359.764580403" Feb 18 14:38:55 crc kubenswrapper[4739]: I0218 14:38:55.410502 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:38:55 crc kubenswrapper[4739]: E0218 14:38:55.411281 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:39:08 crc kubenswrapper[4739]: I0218 14:39:08.418306 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:39:08 crc kubenswrapper[4739]: E0218 14:39:08.419166 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:39:23 crc kubenswrapper[4739]: I0218 14:39:23.411131 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:39:23 crc kubenswrapper[4739]: E0218 14:39:23.412122 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:39:36 crc kubenswrapper[4739]: I0218 14:39:36.412686 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:39:36 crc kubenswrapper[4739]: E0218 14:39:36.414091 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:39:49 crc kubenswrapper[4739]: I0218 14:39:49.410682 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:39:49 crc kubenswrapper[4739]: E0218 14:39:49.411501 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:40:00 crc kubenswrapper[4739]: I0218 14:40:00.410514 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:40:00 crc kubenswrapper[4739]: E0218 14:40:00.411463 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:40:12 crc kubenswrapper[4739]: I0218 14:40:12.410603 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:40:12 crc kubenswrapper[4739]: E0218 14:40:12.411371 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:40:26 crc kubenswrapper[4739]: I0218 14:40:26.413349 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:40:26 crc kubenswrapper[4739]: E0218 14:40:26.414299 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:40:37 crc kubenswrapper[4739]: I0218 14:40:37.410951 4739 
scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:40:38 crc kubenswrapper[4739]: I0218 14:40:38.290718 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"9e17d18af713eac811526fbaaad6d57477c17ffe08200b05230d0655ecc291fd"} Feb 18 14:41:57 crc kubenswrapper[4739]: I0218 14:41:57.019061 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-668fffc447-mjpk7" podUID="ac478be7-1c16-4a7f-a2d2-618cfe76c3d3" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 18 14:42:17 crc kubenswrapper[4739]: I0218 14:42:17.312005 4739 generic.go:334] "Generic (PLEG): container finished" podID="bd7dea6a-d047-4a6c-809f-395a7cf418e8" containerID="0d816f28e3c7a56f082308e8cbb34038d3dc00b07cc36fa6c338ae226d5a44e8" exitCode=0 Feb 18 14:42:17 crc kubenswrapper[4739]: I0218 14:42:17.312210 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" event={"ID":"bd7dea6a-d047-4a6c-809f-395a7cf418e8","Type":"ContainerDied","Data":"0d816f28e3c7a56f082308e8cbb34038d3dc00b07cc36fa6c338ae226d5a44e8"} Feb 18 14:42:18 crc kubenswrapper[4739]: I0218 14:42:18.820565 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" Feb 18 14:42:18 crc kubenswrapper[4739]: I0218 14:42:18.947178 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db6tp\" (UniqueName: \"kubernetes.io/projected/bd7dea6a-d047-4a6c-809f-395a7cf418e8-kube-api-access-db6tp\") pod \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " Feb 18 14:42:18 crc kubenswrapper[4739]: I0218 14:42:18.947273 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-ssh-key-openstack-edpm-ipam\") pod \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " Feb 18 14:42:18 crc kubenswrapper[4739]: I0218 14:42:18.947368 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-libvirt-combined-ca-bundle\") pod \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " Feb 18 14:42:18 crc kubenswrapper[4739]: I0218 14:42:18.947488 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-libvirt-secret-0\") pod \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " Feb 18 14:42:18 crc kubenswrapper[4739]: I0218 14:42:18.947545 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-inventory\") pod \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\" (UID: \"bd7dea6a-d047-4a6c-809f-395a7cf418e8\") " Feb 18 14:42:18 crc kubenswrapper[4739]: I0218 14:42:18.953654 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "bd7dea6a-d047-4a6c-809f-395a7cf418e8" (UID: "bd7dea6a-d047-4a6c-809f-395a7cf418e8"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:42:18 crc kubenswrapper[4739]: I0218 14:42:18.956548 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd7dea6a-d047-4a6c-809f-395a7cf418e8-kube-api-access-db6tp" (OuterVolumeSpecName: "kube-api-access-db6tp") pod "bd7dea6a-d047-4a6c-809f-395a7cf418e8" (UID: "bd7dea6a-d047-4a6c-809f-395a7cf418e8"). InnerVolumeSpecName "kube-api-access-db6tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:42:18 crc kubenswrapper[4739]: I0218 14:42:18.980126 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bd7dea6a-d047-4a6c-809f-395a7cf418e8" (UID: "bd7dea6a-d047-4a6c-809f-395a7cf418e8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:42:18 crc kubenswrapper[4739]: I0218 14:42:18.982927 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "bd7dea6a-d047-4a6c-809f-395a7cf418e8" (UID: "bd7dea6a-d047-4a6c-809f-395a7cf418e8"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.006719 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-inventory" (OuterVolumeSpecName: "inventory") pod "bd7dea6a-d047-4a6c-809f-395a7cf418e8" (UID: "bd7dea6a-d047-4a6c-809f-395a7cf418e8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.050308 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.050349 4739 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.050360 4739 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.050369 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd7dea6a-d047-4a6c-809f-395a7cf418e8-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.050379 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db6tp\" (UniqueName: \"kubernetes.io/projected/bd7dea6a-d047-4a6c-809f-395a7cf418e8-kube-api-access-db6tp\") on node \"crc\" DevicePath \"\"" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.336413 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.338672 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-znm2n" event={"ID":"bd7dea6a-d047-4a6c-809f-395a7cf418e8","Type":"ContainerDied","Data":"7b66148ae4cb6a51928b96889edb12cde3405f18efcd057d483e7ddb5cc7b7a1"} Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.338731 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b66148ae4cb6a51928b96889edb12cde3405f18efcd057d483e7ddb5cc7b7a1" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.440282 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw"] Feb 18 14:42:19 crc kubenswrapper[4739]: E0218 14:42:19.440913 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7dea6a-d047-4a6c-809f-395a7cf418e8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.440933 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7dea6a-d047-4a6c-809f-395a7cf418e8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.441225 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7dea6a-d047-4a6c-809f-395a7cf418e8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.442356 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.449424 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.449546 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.449668 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.449959 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.450116 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.450401 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.450590 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.460883 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw"] Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.561973 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.563310 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.563383 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.563435 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.563651 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.563813 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q46b\" (UniqueName: \"kubernetes.io/projected/08b26802-db14-4190-99d1-9c9c7403195b-kube-api-access-8q46b\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.563935 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.563979 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/08b26802-db14-4190-99d1-9c9c7403195b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.564154 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.564251 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.564704 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.666882 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.666985 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.667048 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.667093 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.667156 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.667229 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.667301 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q46b\" (UniqueName: \"kubernetes.io/projected/08b26802-db14-4190-99d1-9c9c7403195b-kube-api-access-8q46b\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.667933 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.667988 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/08b26802-db14-4190-99d1-9c9c7403195b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.668086 4739 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.668172 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.669623 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/08b26802-db14-4190-99d1-9c9c7403195b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.671181 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.671548 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.671572 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.672180 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.672502 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.672961 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.673463 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.673954 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.676186 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.690438 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q46b\" (UniqueName: \"kubernetes.io/projected/08b26802-db14-4190-99d1-9c9c7403195b-kube-api-access-8q46b\") pod \"nova-edpm-deployment-openstack-edpm-ipam-mwcgw\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:19 crc kubenswrapper[4739]: I0218 14:42:19.762104 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:42:20 crc kubenswrapper[4739]: W0218 14:42:20.379574 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08b26802_db14_4190_99d1_9c9c7403195b.slice/crio-5b8c38001166119662f43e073dc7f0b0efaa1371f6fa091eb5a1b243351dc082 WatchSource:0}: Error finding container 5b8c38001166119662f43e073dc7f0b0efaa1371f6fa091eb5a1b243351dc082: Status 404 returned error can't find the container with id 5b8c38001166119662f43e073dc7f0b0efaa1371f6fa091eb5a1b243351dc082 Feb 18 14:42:20 crc kubenswrapper[4739]: I0218 14:42:20.379623 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw"] Feb 18 14:42:20 crc kubenswrapper[4739]: I0218 14:42:20.381785 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 14:42:21 crc kubenswrapper[4739]: I0218 14:42:21.360302 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" event={"ID":"08b26802-db14-4190-99d1-9c9c7403195b","Type":"ContainerStarted","Data":"221a440c53572e2fdfdf122096d71c056281c216b00bcf5699b43df7aabbf6c7"} Feb 18 14:42:21 crc kubenswrapper[4739]: I0218 14:42:21.360567 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" event={"ID":"08b26802-db14-4190-99d1-9c9c7403195b","Type":"ContainerStarted","Data":"5b8c38001166119662f43e073dc7f0b0efaa1371f6fa091eb5a1b243351dc082"} Feb 18 14:42:21 crc kubenswrapper[4739]: I0218 14:42:21.392078 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" podStartSLOduration=1.799046391 podStartE2EDuration="2.392059512s" podCreationTimestamp="2026-02-18 14:42:19 +0000 UTC" firstStartedPulling="2026-02-18 14:42:20.38154717 +0000 UTC m=+2572.877268082" lastFinishedPulling="2026-02-18 14:42:20.974560281 +0000 UTC m=+2573.470281203" observedRunningTime="2026-02-18 14:42:21.381526528 +0000 UTC m=+2573.877247470" watchObservedRunningTime="2026-02-18 14:42:21.392059512 +0000 UTC m=+2573.887780434" Feb 18 14:42:59 crc kubenswrapper[4739]: I0218 14:42:59.374166 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:42:59 crc kubenswrapper[4739]: I0218 14:42:59.374928 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:43:29 crc kubenswrapper[4739]: I0218 14:43:29.373045 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:43:29 crc kubenswrapper[4739]: I0218 14:43:29.373699 4739 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:43:59 crc kubenswrapper[4739]: I0218 14:43:59.372491 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:43:59 crc kubenswrapper[4739]: I0218 14:43:59.372978 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:43:59 crc kubenswrapper[4739]: I0218 14:43:59.373020 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 14:43:59 crc kubenswrapper[4739]: I0218 14:43:59.373926 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9e17d18af713eac811526fbaaad6d57477c17ffe08200b05230d0655ecc291fd"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:43:59 crc kubenswrapper[4739]: I0218 14:43:59.373974 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://9e17d18af713eac811526fbaaad6d57477c17ffe08200b05230d0655ecc291fd" gracePeriod=600 Feb 18 14:43:59 crc kubenswrapper[4739]: I0218 14:43:59.667203 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="9e17d18af713eac811526fbaaad6d57477c17ffe08200b05230d0655ecc291fd" exitCode=0 Feb 18 14:43:59 crc kubenswrapper[4739]: I0218 14:43:59.667251 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"9e17d18af713eac811526fbaaad6d57477c17ffe08200b05230d0655ecc291fd"} Feb 18 14:43:59 crc kubenswrapper[4739]: I0218 14:43:59.667508 4739 scope.go:117] "RemoveContainer" containerID="18e27fe628c0321e65a2442cbf0b5b2e2a4371d9c2b73fa327e8c31802f40934" Feb 18 14:44:00 crc kubenswrapper[4739]: I0218 14:44:00.679707 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6"} Feb 18 14:44:31 crc kubenswrapper[4739]: I0218 14:44:31.991135 4739 generic.go:334] "Generic (PLEG): container finished" podID="08b26802-db14-4190-99d1-9c9c7403195b" containerID="221a440c53572e2fdfdf122096d71c056281c216b00bcf5699b43df7aabbf6c7" exitCode=0 Feb 18 14:44:31 crc kubenswrapper[4739]: I0218 14:44:31.991234 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" event={"ID":"08b26802-db14-4190-99d1-9c9c7403195b","Type":"ContainerDied","Data":"221a440c53572e2fdfdf122096d71c056281c216b00bcf5699b43df7aabbf6c7"} Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.629551 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.709869 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-migration-ssh-key-0\") pod \"08b26802-db14-4190-99d1-9c9c7403195b\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.709971 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-2\") pod \"08b26802-db14-4190-99d1-9c9c7403195b\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.710070 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-1\") pod \"08b26802-db14-4190-99d1-9c9c7403195b\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.710224 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-ssh-key-openstack-edpm-ipam\") pod \"08b26802-db14-4190-99d1-9c9c7403195b\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.710270 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-migration-ssh-key-1\") pod \"08b26802-db14-4190-99d1-9c9c7403195b\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.710296 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-3\") pod \"08b26802-db14-4190-99d1-9c9c7403195b\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.710384 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-0\") pod \"08b26802-db14-4190-99d1-9c9c7403195b\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.710862 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q46b\" (UniqueName: \"kubernetes.io/projected/08b26802-db14-4190-99d1-9c9c7403195b-kube-api-access-8q46b\") pod \"08b26802-db14-4190-99d1-9c9c7403195b\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.710932 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-combined-ca-bundle\") pod \"08b26802-db14-4190-99d1-9c9c7403195b\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.711011 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/08b26802-db14-4190-99d1-9c9c7403195b-nova-extra-config-0\") pod \"08b26802-db14-4190-99d1-9c9c7403195b\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.711066 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-inventory\") pod \"08b26802-db14-4190-99d1-9c9c7403195b\" (UID: \"08b26802-db14-4190-99d1-9c9c7403195b\") " Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.715586 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08b26802-db14-4190-99d1-9c9c7403195b-kube-api-access-8q46b" (OuterVolumeSpecName: "kube-api-access-8q46b") pod "08b26802-db14-4190-99d1-9c9c7403195b" (UID: "08b26802-db14-4190-99d1-9c9c7403195b"). InnerVolumeSpecName "kube-api-access-8q46b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.737914 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "08b26802-db14-4190-99d1-9c9c7403195b" (UID: "08b26802-db14-4190-99d1-9c9c7403195b"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.742382 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "08b26802-db14-4190-99d1-9c9c7403195b" (UID: "08b26802-db14-4190-99d1-9c9c7403195b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.749087 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "08b26802-db14-4190-99d1-9c9c7403195b" (UID: "08b26802-db14-4190-99d1-9c9c7403195b"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.751172 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "08b26802-db14-4190-99d1-9c9c7403195b" (UID: "08b26802-db14-4190-99d1-9c9c7403195b"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.755394 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "08b26802-db14-4190-99d1-9c9c7403195b" (UID: "08b26802-db14-4190-99d1-9c9c7403195b"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.755981 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08b26802-db14-4190-99d1-9c9c7403195b-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "08b26802-db14-4190-99d1-9c9c7403195b" (UID: "08b26802-db14-4190-99d1-9c9c7403195b"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.764175 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-inventory" (OuterVolumeSpecName: "inventory") pod "08b26802-db14-4190-99d1-9c9c7403195b" (UID: "08b26802-db14-4190-99d1-9c9c7403195b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.782822 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "08b26802-db14-4190-99d1-9c9c7403195b" (UID: "08b26802-db14-4190-99d1-9c9c7403195b"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.783818 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "08b26802-db14-4190-99d1-9c9c7403195b" (UID: "08b26802-db14-4190-99d1-9c9c7403195b"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.785592 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "08b26802-db14-4190-99d1-9c9c7403195b" (UID: "08b26802-db14-4190-99d1-9c9c7403195b"). InnerVolumeSpecName "nova-cell1-compute-config-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.814378 4739 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.814409 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q46b\" (UniqueName: \"kubernetes.io/projected/08b26802-db14-4190-99d1-9c9c7403195b-kube-api-access-8q46b\") on node \"crc\" DevicePath \"\"" Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.814418 4739 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.814428 4739 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/08b26802-db14-4190-99d1-9c9c7403195b-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.814437 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.814461 4739 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.814469 4739 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.814477 4739 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.814485 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.814494 4739 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 18 14:44:33 crc kubenswrapper[4739]: I0218 14:44:33.814502 4739 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/08b26802-db14-4190-99d1-9c9c7403195b-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.027426 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" event={"ID":"08b26802-db14-4190-99d1-9c9c7403195b","Type":"ContainerDied","Data":"5b8c38001166119662f43e073dc7f0b0efaa1371f6fa091eb5a1b243351dc082"} Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 
14:44:34.027538 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b8c38001166119662f43e073dc7f0b0efaa1371f6fa091eb5a1b243351dc082" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.027605 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-mwcgw" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.111343 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"] Feb 18 14:44:34 crc kubenswrapper[4739]: E0218 14:44:34.111933 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b26802-db14-4190-99d1-9c9c7403195b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.111956 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b26802-db14-4190-99d1-9c9c7403195b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.112207 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="08b26802-db14-4190-99d1-9c9c7403195b" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.113561 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.120200 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.120215 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.121072 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.121264 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.121380 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.127347 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"] Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.224017 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.224355 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw5d6\" (UniqueName: \"kubernetes.io/projected/aa0510e7-f2a3-4466-b797-dab2e7ec0218-kube-api-access-zw5d6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.224408 4739 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.224474 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.224907 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.225125 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.225245 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.327573 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.327671 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.327773 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: 
\"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.327796 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw5d6\" (UniqueName: \"kubernetes.io/projected/aa0510e7-f2a3-4466-b797-dab2e7ec0218-kube-api-access-zw5d6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.327826 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.327855 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.327953 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.331844 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.332102 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.332225 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.332230 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.332900 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.333042 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.346745 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw5d6\" (UniqueName: \"kubernetes.io/projected/aa0510e7-f2a3-4466-b797-dab2e7ec0218-kube-api-access-zw5d6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:44:34 crc kubenswrapper[4739]: I0218 14:44:34.448842 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:44:35 crc kubenswrapper[4739]: I0218 14:44:35.055891 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x"] Feb 18 14:44:36 crc kubenswrapper[4739]: I0218 14:44:36.063562 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" event={"ID":"aa0510e7-f2a3-4466-b797-dab2e7ec0218","Type":"ContainerStarted","Data":"fb0e030e4912a00d0734d07237c410d248f64fab7894be9ef716125bbc0533aa"} Feb 18 14:44:36 crc kubenswrapper[4739]: I0218 14:44:36.063621 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" event={"ID":"aa0510e7-f2a3-4466-b797-dab2e7ec0218","Type":"ContainerStarted","Data":"fb7106cf2f98b5b393698d853885e2d731a92c39d93dbf1c2bec0a8cb53a7200"} Feb 18 14:44:36 crc kubenswrapper[4739]: I0218 14:44:36.091788 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" podStartSLOduration=1.669031812 podStartE2EDuration="2.091760547s" podCreationTimestamp="2026-02-18 14:44:34 +0000 UTC" firstStartedPulling="2026-02-18 14:44:35.060847637 +0000 UTC m=+2707.556568559" lastFinishedPulling="2026-02-18 14:44:35.483576372 +0000 UTC m=+2707.979297294" observedRunningTime="2026-02-18 14:44:36.080807671 +0000 UTC m=+2708.576528613" watchObservedRunningTime="2026-02-18 14:44:36.091760547 +0000 UTC m=+2708.587481469" Feb 18 14:44:47 crc kubenswrapper[4739]: I0218 14:44:47.453920 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4hwc4"] Feb 18 14:44:47 crc 
kubenswrapper[4739]: I0218 14:44:47.456887 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4hwc4" Feb 18 14:44:47 crc kubenswrapper[4739]: I0218 14:44:47.474811 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4hwc4"] Feb 18 14:44:47 crc kubenswrapper[4739]: I0218 14:44:47.585545 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-utilities\") pod \"redhat-operators-4hwc4\" (UID: \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\") " pod="openshift-marketplace/redhat-operators-4hwc4" Feb 18 14:44:47 crc kubenswrapper[4739]: I0218 14:44:47.585612 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-catalog-content\") pod \"redhat-operators-4hwc4\" (UID: \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\") " pod="openshift-marketplace/redhat-operators-4hwc4" Feb 18 14:44:47 crc kubenswrapper[4739]: I0218 14:44:47.585820 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnjqb\" (UniqueName: \"kubernetes.io/projected/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-kube-api-access-gnjqb\") pod \"redhat-operators-4hwc4\" (UID: \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\") " pod="openshift-marketplace/redhat-operators-4hwc4" Feb 18 14:44:47 crc kubenswrapper[4739]: I0218 14:44:47.688432 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnjqb\" (UniqueName: \"kubernetes.io/projected/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-kube-api-access-gnjqb\") pod \"redhat-operators-4hwc4\" (UID: \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\") " pod="openshift-marketplace/redhat-operators-4hwc4" Feb 18 14:44:47 crc kubenswrapper[4739]: I0218 14:44:47.688721 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-utilities\") pod \"redhat-operators-4hwc4\" (UID: \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\") " pod="openshift-marketplace/redhat-operators-4hwc4" Feb 18 14:44:47 crc kubenswrapper[4739]: I0218 14:44:47.688760 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-catalog-content\") pod \"redhat-operators-4hwc4\" (UID: \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\") " pod="openshift-marketplace/redhat-operators-4hwc4" Feb 18 14:44:47 crc kubenswrapper[4739]: I0218 14:44:47.689384 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-catalog-content\") pod \"redhat-operators-4hwc4\" (UID: \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\") " pod="openshift-marketplace/redhat-operators-4hwc4" Feb 18 14:44:47 crc kubenswrapper[4739]: I0218 14:44:47.689989 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-utilities\") pod \"redhat-operators-4hwc4\" (UID: \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\") " pod="openshift-marketplace/redhat-operators-4hwc4" Feb 18 14:44:47 crc kubenswrapper[4739]: I0218 
14:44:47.713954 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnjqb\" (UniqueName: \"kubernetes.io/projected/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-kube-api-access-gnjqb\") pod \"redhat-operators-4hwc4\" (UID: \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\") " pod="openshift-marketplace/redhat-operators-4hwc4" Feb 18 14:44:47 crc kubenswrapper[4739]: I0218 14:44:47.794340 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4hwc4" Feb 18 14:44:48 crc kubenswrapper[4739]: I0218 14:44:48.325979 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4hwc4"] Feb 18 14:44:49 crc kubenswrapper[4739]: I0218 14:44:49.230325 4739 generic.go:334] "Generic (PLEG): container finished" podID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerID="e81c3a517b570044aa39a5fd00c0de609a7e807294917de5c3acdaf4a632271e" exitCode=0 Feb 18 14:44:49 crc kubenswrapper[4739]: I0218 14:44:49.230411 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwc4" event={"ID":"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4","Type":"ContainerDied","Data":"e81c3a517b570044aa39a5fd00c0de609a7e807294917de5c3acdaf4a632271e"} Feb 18 14:44:49 crc kubenswrapper[4739]: I0218 14:44:49.230714 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwc4" event={"ID":"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4","Type":"ContainerStarted","Data":"d383f10ea29e5502944b4b1aab6dcf695aa1257aa566befeddf73868399abf6a"} Feb 18 14:44:50 crc kubenswrapper[4739]: I0218 14:44:50.242898 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwc4" event={"ID":"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4","Type":"ContainerStarted","Data":"ad84ad772d6d4c2e780e71e749d5a7f0e71b87bdf3d6ccb433cafefda33f1ebc"} Feb 18 14:44:55 crc kubenswrapper[4739]: I0218 14:44:55.300889 4739 generic.go:334] "Generic (PLEG): container finished" podID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerID="ad84ad772d6d4c2e780e71e749d5a7f0e71b87bdf3d6ccb433cafefda33f1ebc" exitCode=0 Feb 18 14:44:55 crc kubenswrapper[4739]: I0218 14:44:55.300992 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwc4" event={"ID":"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4","Type":"ContainerDied","Data":"ad84ad772d6d4c2e780e71e749d5a7f0e71b87bdf3d6ccb433cafefda33f1ebc"} Feb 18 14:44:56 crc kubenswrapper[4739]: I0218 14:44:56.317827 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwc4" event={"ID":"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4","Type":"ContainerStarted","Data":"04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424"} Feb 18 14:44:56 crc kubenswrapper[4739]: I0218 14:44:56.351046 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4hwc4" podStartSLOduration=2.791429591 podStartE2EDuration="9.351022163s" podCreationTimestamp="2026-02-18 14:44:47 +0000 UTC" firstStartedPulling="2026-02-18 14:44:49.237936843 +0000 UTC m=+2721.733657765" lastFinishedPulling="2026-02-18 14:44:55.797529415 +0000 UTC m=+2728.293250337" observedRunningTime="2026-02-18 14:44:56.344632743 +0000 UTC m=+2728.840353685" watchObservedRunningTime="2026-02-18 14:44:56.351022163 +0000 UTC m=+2728.846743085" Feb 18 14:44:57 crc kubenswrapper[4739]: I0218 14:44:57.795270 4739 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4hwc4" Feb 18 14:44:57 crc kubenswrapper[4739]: I0218 14:44:57.795848 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4hwc4" Feb 18 14:44:58 crc kubenswrapper[4739]: I0218 14:44:58.857676 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4hwc4" podUID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerName="registry-server" probeResult="failure" output=< Feb 18 14:44:58 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:44:58 crc kubenswrapper[4739]: > Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.154430 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc"] Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.156997 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc" Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.159893 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.165336 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.168000 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc"] Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.332213 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d759be05-a3d9-4dd0-b360-dc1f752b84be-secret-volume\") pod \"collect-profiles-29523765-q4ltc\" (UID: \"d759be05-a3d9-4dd0-b360-dc1f752b84be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc" Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.332326 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h68t9\" (UniqueName: \"kubernetes.io/projected/d759be05-a3d9-4dd0-b360-dc1f752b84be-kube-api-access-h68t9\") pod \"collect-profiles-29523765-q4ltc\" (UID: \"d759be05-a3d9-4dd0-b360-dc1f752b84be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc" Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.332569 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d759be05-a3d9-4dd0-b360-dc1f752b84be-config-volume\") pod \"collect-profiles-29523765-q4ltc\" (UID: \"d759be05-a3d9-4dd0-b360-dc1f752b84be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc" Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.434980 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h68t9\" (UniqueName: \"kubernetes.io/projected/d759be05-a3d9-4dd0-b360-dc1f752b84be-kube-api-access-h68t9\") pod \"collect-profiles-29523765-q4ltc\" (UID: \"d759be05-a3d9-4dd0-b360-dc1f752b84be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc" Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.435070 4739 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d759be05-a3d9-4dd0-b360-dc1f752b84be-config-volume\") pod \"collect-profiles-29523765-q4ltc\" (UID: \"d759be05-a3d9-4dd0-b360-dc1f752b84be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc" Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.435259 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d759be05-a3d9-4dd0-b360-dc1f752b84be-secret-volume\") pod \"collect-profiles-29523765-q4ltc\" (UID: \"d759be05-a3d9-4dd0-b360-dc1f752b84be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc" Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.436314 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d759be05-a3d9-4dd0-b360-dc1f752b84be-config-volume\") pod \"collect-profiles-29523765-q4ltc\" (UID: \"d759be05-a3d9-4dd0-b360-dc1f752b84be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc" Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.441411 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d759be05-a3d9-4dd0-b360-dc1f752b84be-secret-volume\") pod \"collect-profiles-29523765-q4ltc\" (UID: \"d759be05-a3d9-4dd0-b360-dc1f752b84be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc" Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.455566 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h68t9\" (UniqueName: \"kubernetes.io/projected/d759be05-a3d9-4dd0-b360-dc1f752b84be-kube-api-access-h68t9\") pod \"collect-profiles-29523765-q4ltc\" (UID: \"d759be05-a3d9-4dd0-b360-dc1f752b84be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc" Feb 18 14:45:00 crc kubenswrapper[4739]: I0218 14:45:00.491296 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc" Feb 18 14:45:01 crc kubenswrapper[4739]: I0218 14:45:01.027232 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc"] Feb 18 14:45:01 crc kubenswrapper[4739]: W0218 14:45:01.028749 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd759be05_a3d9_4dd0_b360_dc1f752b84be.slice/crio-f394ca56990878087a56c93c188aa7daa6db1aafdb65530d412d80d6490030bc WatchSource:0}: Error finding container f394ca56990878087a56c93c188aa7daa6db1aafdb65530d412d80d6490030bc: Status 404 returned error can't find the container with id f394ca56990878087a56c93c188aa7daa6db1aafdb65530d412d80d6490030bc Feb 18 14:45:01 crc kubenswrapper[4739]: I0218 14:45:01.373748 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc" event={"ID":"d759be05-a3d9-4dd0-b360-dc1f752b84be","Type":"ContainerStarted","Data":"aa9ba9ec1d52c3700b6b7f0b25f14494ecf423b123e22d781f5b92c7a26b7e48"} Feb 18 14:45:01 crc kubenswrapper[4739]: I0218 14:45:01.374946 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc" event={"ID":"d759be05-a3d9-4dd0-b360-dc1f752b84be","Type":"ContainerStarted","Data":"f394ca56990878087a56c93c188aa7daa6db1aafdb65530d412d80d6490030bc"} Feb 18 14:45:01 crc kubenswrapper[4739]: I0218 14:45:01.402518 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc" podStartSLOduration=1.402492329 podStartE2EDuration="1.402492329s" podCreationTimestamp="2026-02-18 14:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 14:45:01.391572614 +0000 UTC m=+2733.887293546" watchObservedRunningTime="2026-02-18 14:45:01.402492329 +0000 UTC m=+2733.898213251" Feb 18 14:45:02 crc kubenswrapper[4739]: I0218 14:45:02.386257 4739 generic.go:334] "Generic (PLEG): container finished" podID="d759be05-a3d9-4dd0-b360-dc1f752b84be" containerID="aa9ba9ec1d52c3700b6b7f0b25f14494ecf423b123e22d781f5b92c7a26b7e48" exitCode=0 Feb 18 14:45:02 crc kubenswrapper[4739]: I0218 14:45:02.386307 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc" event={"ID":"d759be05-a3d9-4dd0-b360-dc1f752b84be","Type":"ContainerDied","Data":"aa9ba9ec1d52c3700b6b7f0b25f14494ecf423b123e22d781f5b92c7a26b7e48"} Feb 18 14:45:03 crc kubenswrapper[4739]: I0218 14:45:03.881946 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc" Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.043829 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d759be05-a3d9-4dd0-b360-dc1f752b84be-config-volume\") pod \"d759be05-a3d9-4dd0-b360-dc1f752b84be\" (UID: \"d759be05-a3d9-4dd0-b360-dc1f752b84be\") " Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.044130 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d759be05-a3d9-4dd0-b360-dc1f752b84be-secret-volume\") pod \"d759be05-a3d9-4dd0-b360-dc1f752b84be\" (UID: \"d759be05-a3d9-4dd0-b360-dc1f752b84be\") " Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.044425 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d759be05-a3d9-4dd0-b360-dc1f752b84be-config-volume" (OuterVolumeSpecName: "config-volume") pod "d759be05-a3d9-4dd0-b360-dc1f752b84be" (UID: "d759be05-a3d9-4dd0-b360-dc1f752b84be"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.044590 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h68t9\" (UniqueName: \"kubernetes.io/projected/d759be05-a3d9-4dd0-b360-dc1f752b84be-kube-api-access-h68t9\") pod \"d759be05-a3d9-4dd0-b360-dc1f752b84be\" (UID: \"d759be05-a3d9-4dd0-b360-dc1f752b84be\") " Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.045532 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d759be05-a3d9-4dd0-b360-dc1f752b84be-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.050268 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d759be05-a3d9-4dd0-b360-dc1f752b84be-kube-api-access-h68t9" (OuterVolumeSpecName: "kube-api-access-h68t9") pod "d759be05-a3d9-4dd0-b360-dc1f752b84be" (UID: "d759be05-a3d9-4dd0-b360-dc1f752b84be"). InnerVolumeSpecName "kube-api-access-h68t9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.050607 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d759be05-a3d9-4dd0-b360-dc1f752b84be-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d759be05-a3d9-4dd0-b360-dc1f752b84be" (UID: "d759be05-a3d9-4dd0-b360-dc1f752b84be"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.149820 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d759be05-a3d9-4dd0-b360-dc1f752b84be-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.149859 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h68t9\" (UniqueName: \"kubernetes.io/projected/d759be05-a3d9-4dd0-b360-dc1f752b84be-kube-api-access-h68t9\") on node \"crc\" DevicePath \"\"" Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.412249 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc" Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.438151 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc" event={"ID":"d759be05-a3d9-4dd0-b360-dc1f752b84be","Type":"ContainerDied","Data":"f394ca56990878087a56c93c188aa7daa6db1aafdb65530d412d80d6490030bc"} Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.438206 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f394ca56990878087a56c93c188aa7daa6db1aafdb65530d412d80d6490030bc" Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.491308 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj"] Feb 18 14:45:04 crc kubenswrapper[4739]: I0218 14:45:04.504261 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523720-vljqj"] Feb 18 14:45:06 crc kubenswrapper[4739]: I0218 14:45:06.447065 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0" path="/var/lib/kubelet/pods/f06634f8-0f0f-44f2-9a1e-9cb8d4c252f0/volumes" Feb 18 14:45:08 crc kubenswrapper[4739]: I0218 14:45:08.869110 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4hwc4" podUID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerName="registry-server" probeResult="failure" output=< Feb 18 14:45:08 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:45:08 crc kubenswrapper[4739]: > Feb 18 14:45:18 crc kubenswrapper[4739]: I0218 14:45:18.844755 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4hwc4" podUID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerName="registry-server" probeResult="failure" output=< Feb 18 14:45:18 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:45:18 crc kubenswrapper[4739]: > Feb 18 14:45:27 crc kubenswrapper[4739]: I0218 14:45:27.861885 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4hwc4" Feb 18 14:45:27 crc kubenswrapper[4739]: I0218 14:45:27.921271 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4hwc4" Feb 18 14:45:28 crc kubenswrapper[4739]: I0218 14:45:28.111148 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4hwc4"] Feb 18 14:45:29 crc kubenswrapper[4739]: I0218 14:45:29.677236 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4hwc4" podUID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerName="registry-server" containerID="cri-o://04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424" gracePeriod=2 Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.346378 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4hwc4" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.407770 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-utilities\") pod \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\" (UID: \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\") " Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.407845 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnjqb\" (UniqueName: \"kubernetes.io/projected/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-kube-api-access-gnjqb\") pod \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\" (UID: \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\") " Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.408128 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-catalog-content\") pod \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\" (UID: \"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4\") " Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.408557 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-utilities" (OuterVolumeSpecName: "utilities") pod "f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" (UID: "f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.409579 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.415750 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-kube-api-access-gnjqb" (OuterVolumeSpecName: "kube-api-access-gnjqb") pod "f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" (UID: "f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4"). InnerVolumeSpecName "kube-api-access-gnjqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.513093 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnjqb\" (UniqueName: \"kubernetes.io/projected/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-kube-api-access-gnjqb\") on node \"crc\" DevicePath \"\"" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.552926 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" (UID: "f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.615077 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.690432 4739 generic.go:334] "Generic (PLEG): container finished" podID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerID="04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424" exitCode=0 Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.690489 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwc4" event={"ID":"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4","Type":"ContainerDied","Data":"04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424"} Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.690517 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwc4" event={"ID":"f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4","Type":"ContainerDied","Data":"d383f10ea29e5502944b4b1aab6dcf695aa1257aa566befeddf73868399abf6a"} Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.690543 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4hwc4" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.690543 4739 scope.go:117] "RemoveContainer" containerID="04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.720035 4739 scope.go:117] "RemoveContainer" containerID="ad84ad772d6d4c2e780e71e749d5a7f0e71b87bdf3d6ccb433cafefda33f1ebc" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.737580 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4hwc4"] Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.752632 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4hwc4"] Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.757994 4739 scope.go:117] "RemoveContainer" containerID="e81c3a517b570044aa39a5fd00c0de609a7e807294917de5c3acdaf4a632271e" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.802362 4739 scope.go:117] "RemoveContainer" containerID="04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424" Feb 18 14:45:30 crc kubenswrapper[4739]: E0218 14:45:30.802864 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424\": container with ID starting with 04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424 not found: ID does not exist" containerID="04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.802906 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424"} err="failed to get container status \"04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424\": rpc error: code = NotFound desc = could not find container \"04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424\": container with ID starting with 04311eba4b620f0ae073e7b9e5251bb36ec0ca94faa047de48ce8233dd69c424 not found: ID does not exist" Feb 18 14:45:30 crc 
kubenswrapper[4739]: I0218 14:45:30.802934 4739 scope.go:117] "RemoveContainer" containerID="ad84ad772d6d4c2e780e71e749d5a7f0e71b87bdf3d6ccb433cafefda33f1ebc" Feb 18 14:45:30 crc kubenswrapper[4739]: E0218 14:45:30.803322 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad84ad772d6d4c2e780e71e749d5a7f0e71b87bdf3d6ccb433cafefda33f1ebc\": container with ID starting with ad84ad772d6d4c2e780e71e749d5a7f0e71b87bdf3d6ccb433cafefda33f1ebc not found: ID does not exist" containerID="ad84ad772d6d4c2e780e71e749d5a7f0e71b87bdf3d6ccb433cafefda33f1ebc" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.803380 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad84ad772d6d4c2e780e71e749d5a7f0e71b87bdf3d6ccb433cafefda33f1ebc"} err="failed to get container status \"ad84ad772d6d4c2e780e71e749d5a7f0e71b87bdf3d6ccb433cafefda33f1ebc\": rpc error: code = NotFound desc = could not find container \"ad84ad772d6d4c2e780e71e749d5a7f0e71b87bdf3d6ccb433cafefda33f1ebc\": container with ID starting with ad84ad772d6d4c2e780e71e749d5a7f0e71b87bdf3d6ccb433cafefda33f1ebc not found: ID does not exist" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.803405 4739 scope.go:117] "RemoveContainer" containerID="e81c3a517b570044aa39a5fd00c0de609a7e807294917de5c3acdaf4a632271e" Feb 18 14:45:30 crc kubenswrapper[4739]: E0218 14:45:30.803775 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e81c3a517b570044aa39a5fd00c0de609a7e807294917de5c3acdaf4a632271e\": container with ID starting with e81c3a517b570044aa39a5fd00c0de609a7e807294917de5c3acdaf4a632271e not found: ID does not exist" containerID="e81c3a517b570044aa39a5fd00c0de609a7e807294917de5c3acdaf4a632271e" Feb 18 14:45:30 crc kubenswrapper[4739]: I0218 14:45:30.803845 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e81c3a517b570044aa39a5fd00c0de609a7e807294917de5c3acdaf4a632271e"} err="failed to get container status \"e81c3a517b570044aa39a5fd00c0de609a7e807294917de5c3acdaf4a632271e\": rpc error: code = NotFound desc = could not find container \"e81c3a517b570044aa39a5fd00c0de609a7e807294917de5c3acdaf4a632271e\": container with ID starting with e81c3a517b570044aa39a5fd00c0de609a7e807294917de5c3acdaf4a632271e not found: ID does not exist" Feb 18 14:45:32 crc kubenswrapper[4739]: I0218 14:45:32.423354 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" path="/var/lib/kubelet/pods/f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4/volumes" Feb 18 14:45:37 crc kubenswrapper[4739]: I0218 14:45:37.240033 4739 scope.go:117] "RemoveContainer" containerID="74c7bbe24b159d4bcf411cc4b8b9d30acdb5e3c7b45e81fb2a3d542d4b3390c4" Feb 18 14:45:59 crc kubenswrapper[4739]: I0218 14:45:59.372877 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:45:59 crc kubenswrapper[4739]: I0218 14:45:59.373554 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.849498 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bq9l2"] Feb 18 14:46:07 crc kubenswrapper[4739]: E0218 14:46:07.850959 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d759be05-a3d9-4dd0-b360-dc1f752b84be" containerName="collect-profiles" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.850978 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="d759be05-a3d9-4dd0-b360-dc1f752b84be" containerName="collect-profiles" Feb 18 14:46:07 crc kubenswrapper[4739]: E0218 14:46:07.850998 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerName="extract-utilities" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.851006 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerName="extract-utilities" Feb 18 14:46:07 crc kubenswrapper[4739]: E0218 14:46:07.851027 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerName="extract-content" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.851036 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerName="extract-content" Feb 18 14:46:07 crc kubenswrapper[4739]: E0218 14:46:07.851056 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerName="registry-server" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.851065 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerName="registry-server" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.851387 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="d759be05-a3d9-4dd0-b360-dc1f752b84be" containerName="collect-profiles" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.851419 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5dc9bee-ad05-43ea-8b9d-8aa6fc3403f4" containerName="registry-server" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.853779 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.864099 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bq9l2"] Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.882930 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-catalog-content\") pod \"community-operators-bq9l2\" (UID: \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\") " pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.883038 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hh49\" (UniqueName: \"kubernetes.io/projected/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-kube-api-access-5hh49\") pod \"community-operators-bq9l2\" (UID: \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\") " pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.883080 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-utilities\") pod \"community-operators-bq9l2\" (UID: \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\") " pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.985104 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hh49\" (UniqueName: \"kubernetes.io/projected/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-kube-api-access-5hh49\") pod \"community-operators-bq9l2\" (UID: \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\") " pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.985200 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-utilities\") pod \"community-operators-bq9l2\" (UID: \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\") " pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.985424 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-catalog-content\") pod \"community-operators-bq9l2\" (UID: \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\") " pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.985824 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-utilities\") pod \"community-operators-bq9l2\" (UID: \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\") " pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:07 crc kubenswrapper[4739]: I0218 14:46:07.985879 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-catalog-content\") pod \"community-operators-bq9l2\" (UID: \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\") " pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:08 crc kubenswrapper[4739]: I0218 14:46:08.028934 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5hh49\" (UniqueName: \"kubernetes.io/projected/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-kube-api-access-5hh49\") pod \"community-operators-bq9l2\" (UID: \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\") " pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:08 crc kubenswrapper[4739]: I0218 14:46:08.192514 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:08 crc kubenswrapper[4739]: I0218 14:46:08.760541 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bq9l2"] Feb 18 14:46:08 crc kubenswrapper[4739]: W0218 14:46:08.764350 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb797d4b2_d333_4327_b9e7_f4eeec12ae1d.slice/crio-fc0c393b5895cde4aa020787ab4efbc0f07df369f4c5bd0736d517f1681be106 WatchSource:0}: Error finding container fc0c393b5895cde4aa020787ab4efbc0f07df369f4c5bd0736d517f1681be106: Status 404 returned error can't find the container with id fc0c393b5895cde4aa020787ab4efbc0f07df369f4c5bd0736d517f1681be106 Feb 18 14:46:09 crc kubenswrapper[4739]: I0218 14:46:09.127475 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9l2" event={"ID":"b797d4b2-d333-4327-b9e7-f4eeec12ae1d","Type":"ContainerStarted","Data":"e3588b2c4acef378ad8bf49336e55713004df6d4ab84776c2ebe24ffc6aaf6d3"} Feb 18 14:46:09 crc kubenswrapper[4739]: I0218 14:46:09.127726 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9l2" event={"ID":"b797d4b2-d333-4327-b9e7-f4eeec12ae1d","Type":"ContainerStarted","Data":"fc0c393b5895cde4aa020787ab4efbc0f07df369f4c5bd0736d517f1681be106"} Feb 18 14:46:10 crc kubenswrapper[4739]: I0218 14:46:10.139346 4739 generic.go:334] "Generic (PLEG): container finished" podID="b797d4b2-d333-4327-b9e7-f4eeec12ae1d" containerID="e3588b2c4acef378ad8bf49336e55713004df6d4ab84776c2ebe24ffc6aaf6d3" exitCode=0 Feb 18 14:46:10 crc kubenswrapper[4739]: I0218 14:46:10.139611 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9l2" event={"ID":"b797d4b2-d333-4327-b9e7-f4eeec12ae1d","Type":"ContainerDied","Data":"e3588b2c4acef378ad8bf49336e55713004df6d4ab84776c2ebe24ffc6aaf6d3"} Feb 18 14:46:11 crc kubenswrapper[4739]: I0218 14:46:11.156180 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9l2" event={"ID":"b797d4b2-d333-4327-b9e7-f4eeec12ae1d","Type":"ContainerStarted","Data":"553d36838047d8ac9bfcc172b7dd300b4c47496f861039f63c10130fe01decd0"} Feb 18 14:46:13 crc kubenswrapper[4739]: I0218 14:46:13.180888 4739 generic.go:334] "Generic (PLEG): container finished" podID="b797d4b2-d333-4327-b9e7-f4eeec12ae1d" containerID="553d36838047d8ac9bfcc172b7dd300b4c47496f861039f63c10130fe01decd0" exitCode=0 Feb 18 14:46:13 crc kubenswrapper[4739]: I0218 14:46:13.180986 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9l2" event={"ID":"b797d4b2-d333-4327-b9e7-f4eeec12ae1d","Type":"ContainerDied","Data":"553d36838047d8ac9bfcc172b7dd300b4c47496f861039f63c10130fe01decd0"} Feb 18 14:46:14 crc kubenswrapper[4739]: I0218 14:46:14.193287 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9l2" 
event={"ID":"b797d4b2-d333-4327-b9e7-f4eeec12ae1d","Type":"ContainerStarted","Data":"124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c"} Feb 18 14:46:14 crc kubenswrapper[4739]: I0218 14:46:14.220982 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bq9l2" podStartSLOduration=3.640985866 podStartE2EDuration="7.220960385s" podCreationTimestamp="2026-02-18 14:46:07 +0000 UTC" firstStartedPulling="2026-02-18 14:46:10.141661344 +0000 UTC m=+2802.637382266" lastFinishedPulling="2026-02-18 14:46:13.721635863 +0000 UTC m=+2806.217356785" observedRunningTime="2026-02-18 14:46:14.21233197 +0000 UTC m=+2806.708052892" watchObservedRunningTime="2026-02-18 14:46:14.220960385 +0000 UTC m=+2806.716681307" Feb 18 14:46:18 crc kubenswrapper[4739]: I0218 14:46:18.193361 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:18 crc kubenswrapper[4739]: I0218 14:46:18.193759 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:18 crc kubenswrapper[4739]: I0218 14:46:18.247541 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:18 crc kubenswrapper[4739]: I0218 14:46:18.311289 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:18 crc kubenswrapper[4739]: I0218 14:46:18.499870 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bq9l2"] Feb 18 14:46:20 crc kubenswrapper[4739]: I0218 14:46:20.254271 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bq9l2" podUID="b797d4b2-d333-4327-b9e7-f4eeec12ae1d" containerName="registry-server" containerID="cri-o://124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c" gracePeriod=2 Feb 18 14:46:20 crc kubenswrapper[4739]: I0218 14:46:20.792858 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:20 crc kubenswrapper[4739]: I0218 14:46:20.936411 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hh49\" (UniqueName: \"kubernetes.io/projected/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-kube-api-access-5hh49\") pod \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\" (UID: \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\") " Feb 18 14:46:20 crc kubenswrapper[4739]: I0218 14:46:20.936615 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-utilities\") pod \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\" (UID: \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\") " Feb 18 14:46:20 crc kubenswrapper[4739]: I0218 14:46:20.936792 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-catalog-content\") pod \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\" (UID: \"b797d4b2-d333-4327-b9e7-f4eeec12ae1d\") " Feb 18 14:46:20 crc kubenswrapper[4739]: I0218 14:46:20.937759 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-utilities" (OuterVolumeSpecName: "utilities") pod "b797d4b2-d333-4327-b9e7-f4eeec12ae1d" (UID: "b797d4b2-d333-4327-b9e7-f4eeec12ae1d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:46:20 crc kubenswrapper[4739]: I0218 14:46:20.938878 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:20 crc kubenswrapper[4739]: I0218 14:46:20.951863 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-kube-api-access-5hh49" (OuterVolumeSpecName: "kube-api-access-5hh49") pod "b797d4b2-d333-4327-b9e7-f4eeec12ae1d" (UID: "b797d4b2-d333-4327-b9e7-f4eeec12ae1d"). InnerVolumeSpecName "kube-api-access-5hh49". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.001479 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b797d4b2-d333-4327-b9e7-f4eeec12ae1d" (UID: "b797d4b2-d333-4327-b9e7-f4eeec12ae1d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.040680 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.040711 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hh49\" (UniqueName: \"kubernetes.io/projected/b797d4b2-d333-4327-b9e7-f4eeec12ae1d-kube-api-access-5hh49\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.271658 4739 generic.go:334] "Generic (PLEG): container finished" podID="b797d4b2-d333-4327-b9e7-f4eeec12ae1d" containerID="124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c" exitCode=0 Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.271705 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9l2" event={"ID":"b797d4b2-d333-4327-b9e7-f4eeec12ae1d","Type":"ContainerDied","Data":"124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c"} Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.271733 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9l2" event={"ID":"b797d4b2-d333-4327-b9e7-f4eeec12ae1d","Type":"ContainerDied","Data":"fc0c393b5895cde4aa020787ab4efbc0f07df369f4c5bd0736d517f1681be106"} Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.271749 4739 scope.go:117] "RemoveContainer" containerID="124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.271746 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bq9l2" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.299613 4739 scope.go:117] "RemoveContainer" containerID="553d36838047d8ac9bfcc172b7dd300b4c47496f861039f63c10130fe01decd0" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.306691 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bq9l2"] Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.315836 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bq9l2"] Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.325648 4739 scope.go:117] "RemoveContainer" containerID="e3588b2c4acef378ad8bf49336e55713004df6d4ab84776c2ebe24ffc6aaf6d3" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.383232 4739 scope.go:117] "RemoveContainer" containerID="124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c" Feb 18 14:46:21 crc kubenswrapper[4739]: E0218 14:46:21.383672 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c\": container with ID starting with 124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c not found: ID does not exist" containerID="124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.383712 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c"} err="failed to get container status \"124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c\": rpc error: code = NotFound desc = could not find container \"124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c\": container with ID starting with 124216e644ef5e83a98e845330d0aa9f90fd46aabb68a77862adfbd8f057047c not found: ID does not exist" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.383742 4739 scope.go:117] "RemoveContainer" containerID="553d36838047d8ac9bfcc172b7dd300b4c47496f861039f63c10130fe01decd0" Feb 18 14:46:21 crc kubenswrapper[4739]: E0218 14:46:21.384102 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"553d36838047d8ac9bfcc172b7dd300b4c47496f861039f63c10130fe01decd0\": container with ID starting with 553d36838047d8ac9bfcc172b7dd300b4c47496f861039f63c10130fe01decd0 not found: ID does not exist" containerID="553d36838047d8ac9bfcc172b7dd300b4c47496f861039f63c10130fe01decd0" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.384123 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"553d36838047d8ac9bfcc172b7dd300b4c47496f861039f63c10130fe01decd0"} err="failed to get container status \"553d36838047d8ac9bfcc172b7dd300b4c47496f861039f63c10130fe01decd0\": rpc error: code = NotFound desc = could not find container \"553d36838047d8ac9bfcc172b7dd300b4c47496f861039f63c10130fe01decd0\": container with ID starting with 553d36838047d8ac9bfcc172b7dd300b4c47496f861039f63c10130fe01decd0 not found: ID does not exist" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.384138 4739 scope.go:117] "RemoveContainer" containerID="e3588b2c4acef378ad8bf49336e55713004df6d4ab84776c2ebe24ffc6aaf6d3" Feb 18 14:46:21 crc kubenswrapper[4739]: E0218 14:46:21.384475 4739 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e3588b2c4acef378ad8bf49336e55713004df6d4ab84776c2ebe24ffc6aaf6d3\": container with ID starting with e3588b2c4acef378ad8bf49336e55713004df6d4ab84776c2ebe24ffc6aaf6d3 not found: ID does not exist" containerID="e3588b2c4acef378ad8bf49336e55713004df6d4ab84776c2ebe24ffc6aaf6d3" Feb 18 14:46:21 crc kubenswrapper[4739]: I0218 14:46:21.384496 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3588b2c4acef378ad8bf49336e55713004df6d4ab84776c2ebe24ffc6aaf6d3"} err="failed to get container status \"e3588b2c4acef378ad8bf49336e55713004df6d4ab84776c2ebe24ffc6aaf6d3\": rpc error: code = NotFound desc = could not find container \"e3588b2c4acef378ad8bf49336e55713004df6d4ab84776c2ebe24ffc6aaf6d3\": container with ID starting with e3588b2c4acef378ad8bf49336e55713004df6d4ab84776c2ebe24ffc6aaf6d3 not found: ID does not exist" Feb 18 14:46:22 crc kubenswrapper[4739]: I0218 14:46:22.424478 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b797d4b2-d333-4327-b9e7-f4eeec12ae1d" path="/var/lib/kubelet/pods/b797d4b2-d333-4327-b9e7-f4eeec12ae1d/volumes" Feb 18 14:46:29 crc kubenswrapper[4739]: I0218 14:46:29.372732 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:46:29 crc kubenswrapper[4739]: I0218 14:46:29.373424 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.552849 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-74fzf"] Feb 18 14:46:39 crc kubenswrapper[4739]: E0218 14:46:39.553826 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b797d4b2-d333-4327-b9e7-f4eeec12ae1d" containerName="extract-content" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.553840 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b797d4b2-d333-4327-b9e7-f4eeec12ae1d" containerName="extract-content" Feb 18 14:46:39 crc kubenswrapper[4739]: E0218 14:46:39.553876 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b797d4b2-d333-4327-b9e7-f4eeec12ae1d" containerName="extract-utilities" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.553884 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b797d4b2-d333-4327-b9e7-f4eeec12ae1d" containerName="extract-utilities" Feb 18 14:46:39 crc kubenswrapper[4739]: E0218 14:46:39.553919 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b797d4b2-d333-4327-b9e7-f4eeec12ae1d" containerName="registry-server" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.553925 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="b797d4b2-d333-4327-b9e7-f4eeec12ae1d" containerName="registry-server" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.554126 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="b797d4b2-d333-4327-b9e7-f4eeec12ae1d" containerName="registry-server" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 
14:46:39.556065 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.577518 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-74fzf"] Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.713751 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdkm2\" (UniqueName: \"kubernetes.io/projected/921cb713-1271-40ce-a50a-3444603bbb32-kube-api-access-gdkm2\") pod \"redhat-marketplace-74fzf\" (UID: \"921cb713-1271-40ce-a50a-3444603bbb32\") " pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.713813 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/921cb713-1271-40ce-a50a-3444603bbb32-catalog-content\") pod \"redhat-marketplace-74fzf\" (UID: \"921cb713-1271-40ce-a50a-3444603bbb32\") " pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.714012 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/921cb713-1271-40ce-a50a-3444603bbb32-utilities\") pod \"redhat-marketplace-74fzf\" (UID: \"921cb713-1271-40ce-a50a-3444603bbb32\") " pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.816537 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdkm2\" (UniqueName: \"kubernetes.io/projected/921cb713-1271-40ce-a50a-3444603bbb32-kube-api-access-gdkm2\") pod \"redhat-marketplace-74fzf\" (UID: \"921cb713-1271-40ce-a50a-3444603bbb32\") " pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.816620 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/921cb713-1271-40ce-a50a-3444603bbb32-catalog-content\") pod \"redhat-marketplace-74fzf\" (UID: \"921cb713-1271-40ce-a50a-3444603bbb32\") " pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.816693 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/921cb713-1271-40ce-a50a-3444603bbb32-utilities\") pod \"redhat-marketplace-74fzf\" (UID: \"921cb713-1271-40ce-a50a-3444603bbb32\") " pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.817335 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/921cb713-1271-40ce-a50a-3444603bbb32-utilities\") pod \"redhat-marketplace-74fzf\" (UID: \"921cb713-1271-40ce-a50a-3444603bbb32\") " pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.817333 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/921cb713-1271-40ce-a50a-3444603bbb32-catalog-content\") pod \"redhat-marketplace-74fzf\" (UID: \"921cb713-1271-40ce-a50a-3444603bbb32\") " pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 
14:46:39.841887 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdkm2\" (UniqueName: \"kubernetes.io/projected/921cb713-1271-40ce-a50a-3444603bbb32-kube-api-access-gdkm2\") pod \"redhat-marketplace-74fzf\" (UID: \"921cb713-1271-40ce-a50a-3444603bbb32\") " pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:39 crc kubenswrapper[4739]: I0218 14:46:39.875066 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:40 crc kubenswrapper[4739]: W0218 14:46:40.416805 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod921cb713_1271_40ce_a50a_3444603bbb32.slice/crio-c9bfd091b1236c4f08f03a532bdb9f6bd6df1ad907fddfcfbd523f44d2a32d93 WatchSource:0}: Error finding container c9bfd091b1236c4f08f03a532bdb9f6bd6df1ad907fddfcfbd523f44d2a32d93: Status 404 returned error can't find the container with id c9bfd091b1236c4f08f03a532bdb9f6bd6df1ad907fddfcfbd523f44d2a32d93 Feb 18 14:46:40 crc kubenswrapper[4739]: I0218 14:46:40.425238 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-74fzf"] Feb 18 14:46:40 crc kubenswrapper[4739]: I0218 14:46:40.547103 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74fzf" event={"ID":"921cb713-1271-40ce-a50a-3444603bbb32","Type":"ContainerStarted","Data":"c9bfd091b1236c4f08f03a532bdb9f6bd6df1ad907fddfcfbd523f44d2a32d93"} Feb 18 14:46:41 crc kubenswrapper[4739]: I0218 14:46:41.557351 4739 generic.go:334] "Generic (PLEG): container finished" podID="921cb713-1271-40ce-a50a-3444603bbb32" containerID="cfbc3e7c21bfcf7fb2a0300c6cf86f59e46e81146798cac3c4ec7c2a91e995bf" exitCode=0 Feb 18 14:46:41 crc kubenswrapper[4739]: I0218 14:46:41.557401 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74fzf" event={"ID":"921cb713-1271-40ce-a50a-3444603bbb32","Type":"ContainerDied","Data":"cfbc3e7c21bfcf7fb2a0300c6cf86f59e46e81146798cac3c4ec7c2a91e995bf"} Feb 18 14:46:42 crc kubenswrapper[4739]: I0218 14:46:42.570164 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74fzf" event={"ID":"921cb713-1271-40ce-a50a-3444603bbb32","Type":"ContainerStarted","Data":"694011a4646803a99b218b06f5960e865bed9a664de92248d6d5d411626a40bb"} Feb 18 14:46:43 crc kubenswrapper[4739]: I0218 14:46:43.590567 4739 generic.go:334] "Generic (PLEG): container finished" podID="921cb713-1271-40ce-a50a-3444603bbb32" containerID="694011a4646803a99b218b06f5960e865bed9a664de92248d6d5d411626a40bb" exitCode=0 Feb 18 14:46:43 crc kubenswrapper[4739]: I0218 14:46:43.590677 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74fzf" event={"ID":"921cb713-1271-40ce-a50a-3444603bbb32","Type":"ContainerDied","Data":"694011a4646803a99b218b06f5960e865bed9a664de92248d6d5d411626a40bb"} Feb 18 14:46:44 crc kubenswrapper[4739]: I0218 14:46:44.605868 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74fzf" event={"ID":"921cb713-1271-40ce-a50a-3444603bbb32","Type":"ContainerStarted","Data":"1c2a851580a2605411e69647eb34e2ecb88f56a555327bc3d05c5a969653541a"} Feb 18 14:46:44 crc kubenswrapper[4739]: I0218 14:46:44.627196 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-74fzf" podStartSLOduration=3.160522045 podStartE2EDuration="5.62718003s" podCreationTimestamp="2026-02-18 14:46:39 +0000 UTC" firstStartedPulling="2026-02-18 14:46:41.559364033 +0000 UTC m=+2834.055084955" lastFinishedPulling="2026-02-18 14:46:44.026022018 +0000 UTC m=+2836.521742940" observedRunningTime="2026-02-18 14:46:44.624595845 +0000 UTC m=+2837.120316787" watchObservedRunningTime="2026-02-18 14:46:44.62718003 +0000 UTC m=+2837.122900952" Feb 18 14:46:48 crc kubenswrapper[4739]: I0218 14:46:48.649804 4739 generic.go:334] "Generic (PLEG): container finished" podID="aa0510e7-f2a3-4466-b797-dab2e7ec0218" containerID="fb0e030e4912a00d0734d07237c410d248f64fab7894be9ef716125bbc0533aa" exitCode=0 Feb 18 14:46:48 crc kubenswrapper[4739]: I0218 14:46:48.649886 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" event={"ID":"aa0510e7-f2a3-4466-b797-dab2e7ec0218","Type":"ContainerDied","Data":"fb0e030e4912a00d0734d07237c410d248f64fab7894be9ef716125bbc0533aa"} Feb 18 14:46:49 crc kubenswrapper[4739]: I0218 14:46:49.876390 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:49 crc kubenswrapper[4739]: I0218 14:46:49.876918 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:49 crc kubenswrapper[4739]: I0218 14:46:49.935060 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.160645 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.179878 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-telemetry-combined-ca-bundle\") pod \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.180108 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-0\") pod \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.180275 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-2\") pod \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.180372 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw5d6\" (UniqueName: \"kubernetes.io/projected/aa0510e7-f2a3-4466-b797-dab2e7ec0218-kube-api-access-zw5d6\") pod \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.180426 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-1\") pod \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.180498 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ssh-key-openstack-edpm-ipam\") pod \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.180584 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-inventory\") pod \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\" (UID: \"aa0510e7-f2a3-4466-b797-dab2e7ec0218\") " Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.228987 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa0510e7-f2a3-4466-b797-dab2e7ec0218-kube-api-access-zw5d6" (OuterVolumeSpecName: "kube-api-access-zw5d6") pod "aa0510e7-f2a3-4466-b797-dab2e7ec0218" (UID: "aa0510e7-f2a3-4466-b797-dab2e7ec0218"). InnerVolumeSpecName "kube-api-access-zw5d6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.229044 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "aa0510e7-f2a3-4466-b797-dab2e7ec0218" (UID: "aa0510e7-f2a3-4466-b797-dab2e7ec0218"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.240800 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "aa0510e7-f2a3-4466-b797-dab2e7ec0218" (UID: "aa0510e7-f2a3-4466-b797-dab2e7ec0218"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.241757 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "aa0510e7-f2a3-4466-b797-dab2e7ec0218" (UID: "aa0510e7-f2a3-4466-b797-dab2e7ec0218"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.244217 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-inventory" (OuterVolumeSpecName: "inventory") pod "aa0510e7-f2a3-4466-b797-dab2e7ec0218" (UID: "aa0510e7-f2a3-4466-b797-dab2e7ec0218"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.249503 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "aa0510e7-f2a3-4466-b797-dab2e7ec0218" (UID: "aa0510e7-f2a3-4466-b797-dab2e7ec0218"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.266494 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "aa0510e7-f2a3-4466-b797-dab2e7ec0218" (UID: "aa0510e7-f2a3-4466-b797-dab2e7ec0218"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.283477 4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.283522 4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.283538 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw5d6\" (UniqueName: \"kubernetes.io/projected/aa0510e7-f2a3-4466-b797-dab2e7ec0218-kube-api-access-zw5d6\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.283552 4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.283565 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.283578 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.283589 4739 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa0510e7-f2a3-4466-b797-dab2e7ec0218-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.674325 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" event={"ID":"aa0510e7-f2a3-4466-b797-dab2e7ec0218","Type":"ContainerDied","Data":"fb7106cf2f98b5b393698d853885e2d731a92c39d93dbf1c2bec0a8cb53a7200"} Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.674724 4739 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="fb7106cf2f98b5b393698d853885e2d731a92c39d93dbf1c2bec0a8cb53a7200" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.674364 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.759137 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.790998 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8"] Feb 18 14:46:50 crc kubenswrapper[4739]: E0218 14:46:50.791994 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa0510e7-f2a3-4466-b797-dab2e7ec0218" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.792030 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa0510e7-f2a3-4466-b797-dab2e7ec0218" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.792467 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa0510e7-f2a3-4466-b797-dab2e7ec0218" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.793803 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.795810 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzt8v\" (UniqueName: \"kubernetes.io/projected/76808ec1-db9d-494f-9d72-88b2bc28befb-kube-api-access-mzt8v\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.796430 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.796529 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.796731 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " 
pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.796800 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.796927 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.797134 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.799474 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.799681 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.799782 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.800059 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.800296 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.832702 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8"] Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.856861 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-74fzf"] Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.900914 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.901084 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzt8v\" (UniqueName: \"kubernetes.io/projected/76808ec1-db9d-494f-9d72-88b2bc28befb-kube-api-access-mzt8v\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.901294 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.901348 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.901516 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.901552 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.901637 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.907307 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.907307 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " 
pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.907427 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.908024 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.908433 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.910315 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:50 crc kubenswrapper[4739]: I0218 14:46:50.921997 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzt8v\" (UniqueName: \"kubernetes.io/projected/76808ec1-db9d-494f-9d72-88b2bc28befb-kube-api-access-mzt8v\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:51 crc kubenswrapper[4739]: I0218 14:46:51.124961 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:46:51 crc kubenswrapper[4739]: I0218 14:46:51.676740 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8"] Feb 18 14:46:52 crc kubenswrapper[4739]: I0218 14:46:52.697764 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" event={"ID":"76808ec1-db9d-494f-9d72-88b2bc28befb","Type":"ContainerStarted","Data":"657b540d985dd47d680ec53848eb97dfe6752bd7526553565353bdcea431e799"} Feb 18 14:46:52 crc kubenswrapper[4739]: I0218 14:46:52.697832 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-74fzf" podUID="921cb713-1271-40ce-a50a-3444603bbb32" containerName="registry-server" containerID="cri-o://1c2a851580a2605411e69647eb34e2ecb88f56a555327bc3d05c5a969653541a" gracePeriod=2 Feb 18 14:46:53 crc kubenswrapper[4739]: I0218 14:46:53.712958 4739 generic.go:334] "Generic (PLEG): container finished" podID="921cb713-1271-40ce-a50a-3444603bbb32" containerID="1c2a851580a2605411e69647eb34e2ecb88f56a555327bc3d05c5a969653541a" exitCode=0 Feb 18 14:46:53 crc kubenswrapper[4739]: I0218 14:46:53.713033 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74fzf" event={"ID":"921cb713-1271-40ce-a50a-3444603bbb32","Type":"ContainerDied","Data":"1c2a851580a2605411e69647eb34e2ecb88f56a555327bc3d05c5a969653541a"} Feb 18 14:46:53 crc kubenswrapper[4739]: I0218 14:46:53.716083 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" event={"ID":"76808ec1-db9d-494f-9d72-88b2bc28befb","Type":"ContainerStarted","Data":"96da667efa594cf4dd420d385e2d89a921c807275a7e9b6d1e7d7700d6fb0c1c"} Feb 18 14:46:53 crc kubenswrapper[4739]: I0218 14:46:53.738267 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" podStartSLOduration=2.910134845 podStartE2EDuration="3.738246837s" podCreationTimestamp="2026-02-18 14:46:50 +0000 UTC" firstStartedPulling="2026-02-18 14:46:51.689270609 +0000 UTC m=+2844.184991532" lastFinishedPulling="2026-02-18 14:46:52.517382602 +0000 UTC m=+2845.013103524" observedRunningTime="2026-02-18 14:46:53.730396582 +0000 UTC m=+2846.226117504" watchObservedRunningTime="2026-02-18 14:46:53.738246837 +0000 UTC m=+2846.233967759" Feb 18 14:46:53 crc kubenswrapper[4739]: I0218 14:46:53.881554 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:53 crc kubenswrapper[4739]: I0218 14:46:53.991186 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdkm2\" (UniqueName: \"kubernetes.io/projected/921cb713-1271-40ce-a50a-3444603bbb32-kube-api-access-gdkm2\") pod \"921cb713-1271-40ce-a50a-3444603bbb32\" (UID: \"921cb713-1271-40ce-a50a-3444603bbb32\") " Feb 18 14:46:53 crc kubenswrapper[4739]: I0218 14:46:53.991827 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/921cb713-1271-40ce-a50a-3444603bbb32-utilities\") pod \"921cb713-1271-40ce-a50a-3444603bbb32\" (UID: \"921cb713-1271-40ce-a50a-3444603bbb32\") " Feb 18 14:46:53 crc kubenswrapper[4739]: I0218 14:46:53.991995 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/921cb713-1271-40ce-a50a-3444603bbb32-catalog-content\") pod \"921cb713-1271-40ce-a50a-3444603bbb32\" (UID: \"921cb713-1271-40ce-a50a-3444603bbb32\") " Feb 18 14:46:53 crc kubenswrapper[4739]: I0218 14:46:53.993070 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/921cb713-1271-40ce-a50a-3444603bbb32-utilities" (OuterVolumeSpecName: "utilities") pod "921cb713-1271-40ce-a50a-3444603bbb32" (UID: "921cb713-1271-40ce-a50a-3444603bbb32"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:46:53 crc kubenswrapper[4739]: I0218 14:46:53.993429 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/921cb713-1271-40ce-a50a-3444603bbb32-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:53 crc kubenswrapper[4739]: I0218 14:46:53.997222 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/921cb713-1271-40ce-a50a-3444603bbb32-kube-api-access-gdkm2" (OuterVolumeSpecName: "kube-api-access-gdkm2") pod "921cb713-1271-40ce-a50a-3444603bbb32" (UID: "921cb713-1271-40ce-a50a-3444603bbb32"). InnerVolumeSpecName "kube-api-access-gdkm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:46:54 crc kubenswrapper[4739]: I0218 14:46:54.023490 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/921cb713-1271-40ce-a50a-3444603bbb32-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "921cb713-1271-40ce-a50a-3444603bbb32" (UID: "921cb713-1271-40ce-a50a-3444603bbb32"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:46:54 crc kubenswrapper[4739]: I0218 14:46:54.095645 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdkm2\" (UniqueName: \"kubernetes.io/projected/921cb713-1271-40ce-a50a-3444603bbb32-kube-api-access-gdkm2\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:54 crc kubenswrapper[4739]: I0218 14:46:54.095690 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/921cb713-1271-40ce-a50a-3444603bbb32-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:46:54 crc kubenswrapper[4739]: I0218 14:46:54.729790 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-74fzf" event={"ID":"921cb713-1271-40ce-a50a-3444603bbb32","Type":"ContainerDied","Data":"c9bfd091b1236c4f08f03a532bdb9f6bd6df1ad907fddfcfbd523f44d2a32d93"} Feb 18 14:46:54 crc kubenswrapper[4739]: I0218 14:46:54.729853 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-74fzf" Feb 18 14:46:54 crc kubenswrapper[4739]: I0218 14:46:54.731202 4739 scope.go:117] "RemoveContainer" containerID="1c2a851580a2605411e69647eb34e2ecb88f56a555327bc3d05c5a969653541a" Feb 18 14:46:54 crc kubenswrapper[4739]: I0218 14:46:54.763780 4739 scope.go:117] "RemoveContainer" containerID="694011a4646803a99b218b06f5960e865bed9a664de92248d6d5d411626a40bb" Feb 18 14:46:54 crc kubenswrapper[4739]: I0218 14:46:54.766266 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-74fzf"] Feb 18 14:46:54 crc kubenswrapper[4739]: I0218 14:46:54.778746 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-74fzf"] Feb 18 14:46:54 crc kubenswrapper[4739]: I0218 14:46:54.788304 4739 scope.go:117] "RemoveContainer" containerID="cfbc3e7c21bfcf7fb2a0300c6cf86f59e46e81146798cac3c4ec7c2a91e995bf" Feb 18 14:46:56 crc kubenswrapper[4739]: I0218 14:46:56.428660 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="921cb713-1271-40ce-a50a-3444603bbb32" path="/var/lib/kubelet/pods/921cb713-1271-40ce-a50a-3444603bbb32/volumes" Feb 18 14:46:59 crc kubenswrapper[4739]: I0218 14:46:59.372407 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:46:59 crc kubenswrapper[4739]: I0218 14:46:59.373012 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:46:59 crc kubenswrapper[4739]: I0218 14:46:59.373064 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 14:46:59 crc kubenswrapper[4739]: I0218 14:46:59.374588 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:46:59 crc kubenswrapper[4739]: I0218 14:46:59.374662 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" gracePeriod=600 Feb 18 14:46:59 crc kubenswrapper[4739]: E0218 14:46:59.495913 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:46:59 crc kubenswrapper[4739]: I0218 14:46:59.787383 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" exitCode=0 Feb 18 14:46:59 crc kubenswrapper[4739]: I0218 14:46:59.787473 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6"} Feb 18 14:46:59 crc kubenswrapper[4739]: I0218 14:46:59.787799 4739 scope.go:117] "RemoveContainer" containerID="9e17d18af713eac811526fbaaad6d57477c17ffe08200b05230d0655ecc291fd" Feb 18 14:46:59 crc kubenswrapper[4739]: I0218 14:46:59.788766 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:46:59 crc kubenswrapper[4739]: E0218 14:46:59.789048 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.266389 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g4dt7"] Feb 18 14:47:09 crc kubenswrapper[4739]: E0218 14:47:09.267537 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921cb713-1271-40ce-a50a-3444603bbb32" containerName="extract-utilities" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.267559 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="921cb713-1271-40ce-a50a-3444603bbb32" containerName="extract-utilities" Feb 18 14:47:09 crc kubenswrapper[4739]: E0218 14:47:09.267576 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921cb713-1271-40ce-a50a-3444603bbb32" containerName="registry-server" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.267585 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="921cb713-1271-40ce-a50a-3444603bbb32" containerName="registry-server" Feb 18 14:47:09 crc kubenswrapper[4739]: E0218 14:47:09.267603 4739 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="921cb713-1271-40ce-a50a-3444603bbb32" containerName="extract-content" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.267610 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="921cb713-1271-40ce-a50a-3444603bbb32" containerName="extract-content" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.267840 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="921cb713-1271-40ce-a50a-3444603bbb32" containerName="registry-server" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.274300 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.283104 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g4dt7"] Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.446523 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbpbl\" (UniqueName: \"kubernetes.io/projected/31ef9789-e0a5-4ed0-a546-641aac5b15df-kube-api-access-vbpbl\") pod \"certified-operators-g4dt7\" (UID: \"31ef9789-e0a5-4ed0-a546-641aac5b15df\") " pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.447198 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ef9789-e0a5-4ed0-a546-641aac5b15df-utilities\") pod \"certified-operators-g4dt7\" (UID: \"31ef9789-e0a5-4ed0-a546-641aac5b15df\") " pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.447482 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ef9789-e0a5-4ed0-a546-641aac5b15df-catalog-content\") pod \"certified-operators-g4dt7\" (UID: \"31ef9789-e0a5-4ed0-a546-641aac5b15df\") " pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.549732 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ef9789-e0a5-4ed0-a546-641aac5b15df-utilities\") pod \"certified-operators-g4dt7\" (UID: \"31ef9789-e0a5-4ed0-a546-641aac5b15df\") " pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.550191 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbpbl\" (UniqueName: \"kubernetes.io/projected/31ef9789-e0a5-4ed0-a546-641aac5b15df-kube-api-access-vbpbl\") pod \"certified-operators-g4dt7\" (UID: \"31ef9789-e0a5-4ed0-a546-641aac5b15df\") " pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.550318 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ef9789-e0a5-4ed0-a546-641aac5b15df-catalog-content\") pod \"certified-operators-g4dt7\" (UID: \"31ef9789-e0a5-4ed0-a546-641aac5b15df\") " pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.550350 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ef9789-e0a5-4ed0-a546-641aac5b15df-utilities\") pod \"certified-operators-g4dt7\" 
(UID: \"31ef9789-e0a5-4ed0-a546-641aac5b15df\") " pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.555549 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ef9789-e0a5-4ed0-a546-641aac5b15df-catalog-content\") pod \"certified-operators-g4dt7\" (UID: \"31ef9789-e0a5-4ed0-a546-641aac5b15df\") " pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.578195 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbpbl\" (UniqueName: \"kubernetes.io/projected/31ef9789-e0a5-4ed0-a546-641aac5b15df-kube-api-access-vbpbl\") pod \"certified-operators-g4dt7\" (UID: \"31ef9789-e0a5-4ed0-a546-641aac5b15df\") " pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:09 crc kubenswrapper[4739]: I0218 14:47:09.605932 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:10 crc kubenswrapper[4739]: I0218 14:47:10.255557 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g4dt7"] Feb 18 14:47:10 crc kubenswrapper[4739]: I0218 14:47:10.760435 4739 generic.go:334] "Generic (PLEG): container finished" podID="31ef9789-e0a5-4ed0-a546-641aac5b15df" containerID="3a245386b88ebf5b9d4439e36401a5c6323037db3c63622a574b5041b8443585" exitCode=0 Feb 18 14:47:10 crc kubenswrapper[4739]: I0218 14:47:10.760522 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4dt7" event={"ID":"31ef9789-e0a5-4ed0-a546-641aac5b15df","Type":"ContainerDied","Data":"3a245386b88ebf5b9d4439e36401a5c6323037db3c63622a574b5041b8443585"} Feb 18 14:47:10 crc kubenswrapper[4739]: I0218 14:47:10.760567 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4dt7" event={"ID":"31ef9789-e0a5-4ed0-a546-641aac5b15df","Type":"ContainerStarted","Data":"dfb7ab8d99e11afa8c16b02a02b45beccad5fc8fd5bcfc3c1c5972ab28167d30"} Feb 18 14:47:11 crc kubenswrapper[4739]: I0218 14:47:11.777138 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4dt7" event={"ID":"31ef9789-e0a5-4ed0-a546-641aac5b15df","Type":"ContainerStarted","Data":"7d2514bba9c1f8ef524044a2770a624d343e8bd5e5255449e568def79df3e2a0"} Feb 18 14:47:14 crc kubenswrapper[4739]: I0218 14:47:14.412307 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:47:14 crc kubenswrapper[4739]: E0218 14:47:14.412969 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:47:14 crc kubenswrapper[4739]: I0218 14:47:14.817275 4739 generic.go:334] "Generic (PLEG): container finished" podID="31ef9789-e0a5-4ed0-a546-641aac5b15df" containerID="7d2514bba9c1f8ef524044a2770a624d343e8bd5e5255449e568def79df3e2a0" exitCode=0 Feb 18 14:47:14 crc kubenswrapper[4739]: I0218 14:47:14.817327 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-g4dt7" event={"ID":"31ef9789-e0a5-4ed0-a546-641aac5b15df","Type":"ContainerDied","Data":"7d2514bba9c1f8ef524044a2770a624d343e8bd5e5255449e568def79df3e2a0"} Feb 18 14:47:15 crc kubenswrapper[4739]: I0218 14:47:15.832918 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4dt7" event={"ID":"31ef9789-e0a5-4ed0-a546-641aac5b15df","Type":"ContainerStarted","Data":"758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b"} Feb 18 14:47:15 crc kubenswrapper[4739]: I0218 14:47:15.856866 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g4dt7" podStartSLOduration=2.411639476 podStartE2EDuration="6.856849052s" podCreationTimestamp="2026-02-18 14:47:09 +0000 UTC" firstStartedPulling="2026-02-18 14:47:10.763985694 +0000 UTC m=+2863.259706616" lastFinishedPulling="2026-02-18 14:47:15.20919527 +0000 UTC m=+2867.704916192" observedRunningTime="2026-02-18 14:47:15.853475997 +0000 UTC m=+2868.349196929" watchObservedRunningTime="2026-02-18 14:47:15.856849052 +0000 UTC m=+2868.352569994" Feb 18 14:47:19 crc kubenswrapper[4739]: I0218 14:47:19.606993 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:19 crc kubenswrapper[4739]: I0218 14:47:19.607581 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:19 crc kubenswrapper[4739]: I0218 14:47:19.663241 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:25 crc kubenswrapper[4739]: I0218 14:47:25.411069 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:47:25 crc kubenswrapper[4739]: E0218 14:47:25.411978 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:47:29 crc kubenswrapper[4739]: I0218 14:47:29.663781 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:29 crc kubenswrapper[4739]: I0218 14:47:29.725688 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g4dt7"] Feb 18 14:47:29 crc kubenswrapper[4739]: E0218 14:47:29.878589 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Feb 18 14:47:29 crc kubenswrapper[4739]: I0218 14:47:29.971304 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g4dt7" podUID="31ef9789-e0a5-4ed0-a546-641aac5b15df" containerName="registry-server" containerID="cri-o://758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b" gracePeriod=2 Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.494307 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.594307 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ef9789-e0a5-4ed0-a546-641aac5b15df-utilities\") pod \"31ef9789-e0a5-4ed0-a546-641aac5b15df\" (UID: \"31ef9789-e0a5-4ed0-a546-641aac5b15df\") " Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.594905 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ef9789-e0a5-4ed0-a546-641aac5b15df-catalog-content\") pod \"31ef9789-e0a5-4ed0-a546-641aac5b15df\" (UID: \"31ef9789-e0a5-4ed0-a546-641aac5b15df\") " Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.595218 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31ef9789-e0a5-4ed0-a546-641aac5b15df-utilities" (OuterVolumeSpecName: "utilities") pod "31ef9789-e0a5-4ed0-a546-641aac5b15df" (UID: "31ef9789-e0a5-4ed0-a546-641aac5b15df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.596217 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbpbl\" (UniqueName: \"kubernetes.io/projected/31ef9789-e0a5-4ed0-a546-641aac5b15df-kube-api-access-vbpbl\") pod \"31ef9789-e0a5-4ed0-a546-641aac5b15df\" (UID: \"31ef9789-e0a5-4ed0-a546-641aac5b15df\") " Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.597590 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ef9789-e0a5-4ed0-a546-641aac5b15df-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.602123 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ef9789-e0a5-4ed0-a546-641aac5b15df-kube-api-access-vbpbl" (OuterVolumeSpecName: "kube-api-access-vbpbl") pod "31ef9789-e0a5-4ed0-a546-641aac5b15df" (UID: "31ef9789-e0a5-4ed0-a546-641aac5b15df"). InnerVolumeSpecName "kube-api-access-vbpbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.661779 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31ef9789-e0a5-4ed0-a546-641aac5b15df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31ef9789-e0a5-4ed0-a546-641aac5b15df" (UID: "31ef9789-e0a5-4ed0-a546-641aac5b15df"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.700018 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ef9789-e0a5-4ed0-a546-641aac5b15df-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.700050 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbpbl\" (UniqueName: \"kubernetes.io/projected/31ef9789-e0a5-4ed0-a546-641aac5b15df-kube-api-access-vbpbl\") on node \"crc\" DevicePath \"\"" Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.983425 4739 generic.go:334] "Generic (PLEG): container finished" podID="31ef9789-e0a5-4ed0-a546-641aac5b15df" containerID="758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b" exitCode=0 Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.983490 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4dt7" event={"ID":"31ef9789-e0a5-4ed0-a546-641aac5b15df","Type":"ContainerDied","Data":"758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b"} Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.983515 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4dt7" event={"ID":"31ef9789-e0a5-4ed0-a546-641aac5b15df","Type":"ContainerDied","Data":"dfb7ab8d99e11afa8c16b02a02b45beccad5fc8fd5bcfc3c1c5972ab28167d30"} Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.983544 4739 scope.go:117] "RemoveContainer" containerID="758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b" Feb 18 14:47:30 crc kubenswrapper[4739]: I0218 14:47:30.983709 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g4dt7" Feb 18 14:47:31 crc kubenswrapper[4739]: I0218 14:47:31.010107 4739 scope.go:117] "RemoveContainer" containerID="7d2514bba9c1f8ef524044a2770a624d343e8bd5e5255449e568def79df3e2a0" Feb 18 14:47:31 crc kubenswrapper[4739]: I0218 14:47:31.030978 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g4dt7"] Feb 18 14:47:31 crc kubenswrapper[4739]: I0218 14:47:31.041317 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g4dt7"] Feb 18 14:47:31 crc kubenswrapper[4739]: I0218 14:47:31.048934 4739 scope.go:117] "RemoveContainer" containerID="3a245386b88ebf5b9d4439e36401a5c6323037db3c63622a574b5041b8443585" Feb 18 14:47:31 crc kubenswrapper[4739]: I0218 14:47:31.098398 4739 scope.go:117] "RemoveContainer" containerID="758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b" Feb 18 14:47:31 crc kubenswrapper[4739]: E0218 14:47:31.098935 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b\": container with ID starting with 758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b not found: ID does not exist" containerID="758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b" Feb 18 14:47:31 crc kubenswrapper[4739]: I0218 14:47:31.098976 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b"} err="failed to get container status \"758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b\": rpc error: code = NotFound desc = could not find container \"758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b\": container with ID starting with 758aa2dd1f663bd75747dabf996ff041e3922fcc1a6ec500df7aa56b4bde248b not found: ID does not exist" Feb 18 14:47:31 crc kubenswrapper[4739]: I0218 14:47:31.099001 4739 scope.go:117] "RemoveContainer" containerID="7d2514bba9c1f8ef524044a2770a624d343e8bd5e5255449e568def79df3e2a0" Feb 18 14:47:31 crc kubenswrapper[4739]: E0218 14:47:31.099271 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d2514bba9c1f8ef524044a2770a624d343e8bd5e5255449e568def79df3e2a0\": container with ID starting with 7d2514bba9c1f8ef524044a2770a624d343e8bd5e5255449e568def79df3e2a0 not found: ID does not exist" containerID="7d2514bba9c1f8ef524044a2770a624d343e8bd5e5255449e568def79df3e2a0" Feb 18 14:47:31 crc kubenswrapper[4739]: I0218 14:47:31.099297 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d2514bba9c1f8ef524044a2770a624d343e8bd5e5255449e568def79df3e2a0"} err="failed to get container status \"7d2514bba9c1f8ef524044a2770a624d343e8bd5e5255449e568def79df3e2a0\": rpc error: code = NotFound desc = could not find container \"7d2514bba9c1f8ef524044a2770a624d343e8bd5e5255449e568def79df3e2a0\": container with ID starting with 7d2514bba9c1f8ef524044a2770a624d343e8bd5e5255449e568def79df3e2a0 not found: ID does not exist" Feb 18 14:47:31 crc kubenswrapper[4739]: I0218 14:47:31.099315 4739 scope.go:117] "RemoveContainer" containerID="3a245386b88ebf5b9d4439e36401a5c6323037db3c63622a574b5041b8443585" Feb 18 14:47:31 crc kubenswrapper[4739]: E0218 14:47:31.099592 4739 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3a245386b88ebf5b9d4439e36401a5c6323037db3c63622a574b5041b8443585\": container with ID starting with 3a245386b88ebf5b9d4439e36401a5c6323037db3c63622a574b5041b8443585 not found: ID does not exist" containerID="3a245386b88ebf5b9d4439e36401a5c6323037db3c63622a574b5041b8443585" Feb 18 14:47:31 crc kubenswrapper[4739]: I0218 14:47:31.099619 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a245386b88ebf5b9d4439e36401a5c6323037db3c63622a574b5041b8443585"} err="failed to get container status \"3a245386b88ebf5b9d4439e36401a5c6323037db3c63622a574b5041b8443585\": rpc error: code = NotFound desc = could not find container \"3a245386b88ebf5b9d4439e36401a5c6323037db3c63622a574b5041b8443585\": container with ID starting with 3a245386b88ebf5b9d4439e36401a5c6323037db3c63622a574b5041b8443585 not found: ID does not exist" Feb 18 14:47:32 crc kubenswrapper[4739]: I0218 14:47:32.425348 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31ef9789-e0a5-4ed0-a546-641aac5b15df" path="/var/lib/kubelet/pods/31ef9789-e0a5-4ed0-a546-641aac5b15df/volumes" Feb 18 14:47:39 crc kubenswrapper[4739]: I0218 14:47:39.411167 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:47:39 crc kubenswrapper[4739]: E0218 14:47:39.412025 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:47:53 crc kubenswrapper[4739]: I0218 14:47:53.411615 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:47:53 crc kubenswrapper[4739]: E0218 14:47:53.412831 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:48:06 crc kubenswrapper[4739]: I0218 14:48:06.410996 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:48:06 crc kubenswrapper[4739]: E0218 14:48:06.411965 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:48:19 crc kubenswrapper[4739]: I0218 14:48:19.410775 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:48:19 crc kubenswrapper[4739]: E0218 14:48:19.411660 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:48:33 crc kubenswrapper[4739]: I0218 14:48:33.414940 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:48:33 crc kubenswrapper[4739]: E0218 14:48:33.416006 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:48:36 crc kubenswrapper[4739]: I0218 14:48:36.691647 4739 generic.go:334] "Generic (PLEG): container finished" podID="76808ec1-db9d-494f-9d72-88b2bc28befb" containerID="96da667efa594cf4dd420d385e2d89a921c807275a7e9b6d1e7d7700d6fb0c1c" exitCode=0 Feb 18 14:48:36 crc kubenswrapper[4739]: I0218 14:48:36.691782 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" event={"ID":"76808ec1-db9d-494f-9d72-88b2bc28befb","Type":"ContainerDied","Data":"96da667efa594cf4dd420d385e2d89a921c807275a7e9b6d1e7d7700d6fb0c1c"} Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.196511 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.316717 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ssh-key-openstack-edpm-ipam\") pod \"76808ec1-db9d-494f-9d72-88b2bc28befb\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.316876 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzt8v\" (UniqueName: \"kubernetes.io/projected/76808ec1-db9d-494f-9d72-88b2bc28befb-kube-api-access-mzt8v\") pod \"76808ec1-db9d-494f-9d72-88b2bc28befb\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.316975 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-inventory\") pod \"76808ec1-db9d-494f-9d72-88b2bc28befb\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.317083 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-0\") pod \"76808ec1-db9d-494f-9d72-88b2bc28befb\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.317130 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-1\") pod \"76808ec1-db9d-494f-9d72-88b2bc28befb\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.317169 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-2\") pod \"76808ec1-db9d-494f-9d72-88b2bc28befb\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.317257 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-telemetry-power-monitoring-combined-ca-bundle\") pod \"76808ec1-db9d-494f-9d72-88b2bc28befb\" (UID: \"76808ec1-db9d-494f-9d72-88b2bc28befb\") " Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.326392 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "76808ec1-db9d-494f-9d72-88b2bc28befb" (UID: "76808ec1-db9d-494f-9d72-88b2bc28befb"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.331004 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76808ec1-db9d-494f-9d72-88b2bc28befb-kube-api-access-mzt8v" (OuterVolumeSpecName: "kube-api-access-mzt8v") pod "76808ec1-db9d-494f-9d72-88b2bc28befb" (UID: "76808ec1-db9d-494f-9d72-88b2bc28befb"). InnerVolumeSpecName "kube-api-access-mzt8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.354482 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "76808ec1-db9d-494f-9d72-88b2bc28befb" (UID: "76808ec1-db9d-494f-9d72-88b2bc28befb"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.357288 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "76808ec1-db9d-494f-9d72-88b2bc28befb" (UID: "76808ec1-db9d-494f-9d72-88b2bc28befb"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.359815 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "76808ec1-db9d-494f-9d72-88b2bc28befb" (UID: "76808ec1-db9d-494f-9d72-88b2bc28befb"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.373968 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "76808ec1-db9d-494f-9d72-88b2bc28befb" (UID: "76808ec1-db9d-494f-9d72-88b2bc28befb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.374917 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-inventory" (OuterVolumeSpecName: "inventory") pod "76808ec1-db9d-494f-9d72-88b2bc28befb" (UID: "76808ec1-db9d-494f-9d72-88b2bc28befb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.422305 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.422350 4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.422366 4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.422380 4739 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.422396 4739 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.422409 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76808ec1-db9d-494f-9d72-88b2bc28befb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.422420 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzt8v\" (UniqueName: \"kubernetes.io/projected/76808ec1-db9d-494f-9d72-88b2bc28befb-kube-api-access-mzt8v\") on node \"crc\" DevicePath \"\"" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.717543 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" event={"ID":"76808ec1-db9d-494f-9d72-88b2bc28befb","Type":"ContainerDied","Data":"657b540d985dd47d680ec53848eb97dfe6752bd7526553565353bdcea431e799"} Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.717858 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="657b540d985dd47d680ec53848eb97dfe6752bd7526553565353bdcea431e799" 
Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.717597 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.815057 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf"] Feb 18 14:48:38 crc kubenswrapper[4739]: E0218 14:48:38.815679 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ef9789-e0a5-4ed0-a546-641aac5b15df" containerName="extract-utilities" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.815701 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ef9789-e0a5-4ed0-a546-641aac5b15df" containerName="extract-utilities" Feb 18 14:48:38 crc kubenswrapper[4739]: E0218 14:48:38.815733 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ef9789-e0a5-4ed0-a546-641aac5b15df" containerName="extract-content" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.815742 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ef9789-e0a5-4ed0-a546-641aac5b15df" containerName="extract-content" Feb 18 14:48:38 crc kubenswrapper[4739]: E0218 14:48:38.815766 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ef9789-e0a5-4ed0-a546-641aac5b15df" containerName="registry-server" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.815775 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ef9789-e0a5-4ed0-a546-641aac5b15df" containerName="registry-server" Feb 18 14:48:38 crc kubenswrapper[4739]: E0218 14:48:38.815796 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76808ec1-db9d-494f-9d72-88b2bc28befb" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.815807 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="76808ec1-db9d-494f-9d72-88b2bc28befb" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.816095 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="31ef9789-e0a5-4ed0-a546-641aac5b15df" containerName="registry-server" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.816130 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="76808ec1-db9d-494f-9d72-88b2bc28befb" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.817179 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.819334 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.819746 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.821046 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.821303 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-f4qhn" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.821429 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.829099 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf"] Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.935784 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.936053 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.936108 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.936193 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn74c\" (UniqueName: \"kubernetes.io/projected/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-kube-api-access-wn74c\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" Feb 18 14:48:38 crc kubenswrapper[4739]: I0218 14:48:38.936258 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 
14:48:39.039793 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 14:48:39.039850 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 14:48:39.039909 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn74c\" (UniqueName: \"kubernetes.io/projected/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-kube-api-access-wn74c\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 14:48:39.039931 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 14:48:39.040034 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 14:48:39.045762 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 14:48:39.046085 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 14:48:39.046949 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 14:48:39.047936 
4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 14:48:39.065924 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn74c\" (UniqueName: \"kubernetes.io/projected/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-kube-api-access-wn74c\") pod \"logging-edpm-deployment-openstack-edpm-ipam-nsjkf\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 14:48:39.141905 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 14:48:39.719101 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf"] Feb 18 14:48:39 crc kubenswrapper[4739]: I0218 14:48:39.726533 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 14:48:40 crc kubenswrapper[4739]: I0218 14:48:40.739751 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" event={"ID":"61bf8a46-92c1-4b2e-9b8c-8206c618b98a","Type":"ContainerStarted","Data":"bfc6fb2c7af4721cda20daa88a5f20a73e3cf5c1320015a4258367e4bd4b50ed"} Feb 18 14:48:40 crc kubenswrapper[4739]: I0218 14:48:40.740074 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" event={"ID":"61bf8a46-92c1-4b2e-9b8c-8206c618b98a","Type":"ContainerStarted","Data":"baeca6ca604c58789f14edfa8b4d71ad6a8b5f57cc825549b3663ebddc48966e"} Feb 18 14:48:40 crc kubenswrapper[4739]: I0218 14:48:40.762136 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" podStartSLOduration=2.173603483 podStartE2EDuration="2.762113348s" podCreationTimestamp="2026-02-18 14:48:38 +0000 UTC" firstStartedPulling="2026-02-18 14:48:39.726254231 +0000 UTC m=+2952.221975153" lastFinishedPulling="2026-02-18 14:48:40.314764096 +0000 UTC m=+2952.810485018" observedRunningTime="2026-02-18 14:48:40.75932408 +0000 UTC m=+2953.255045012" watchObservedRunningTime="2026-02-18 14:48:40.762113348 +0000 UTC m=+2953.257834290" Feb 18 14:48:47 crc kubenswrapper[4739]: I0218 14:48:47.411413 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:48:47 crc kubenswrapper[4739]: E0218 14:48:47.412338 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:48:54 crc kubenswrapper[4739]: I0218 14:48:54.922060 4739 generic.go:334] "Generic (PLEG): container finished" podID="61bf8a46-92c1-4b2e-9b8c-8206c618b98a" 
containerID="bfc6fb2c7af4721cda20daa88a5f20a73e3cf5c1320015a4258367e4bd4b50ed" exitCode=0 Feb 18 14:48:54 crc kubenswrapper[4739]: I0218 14:48:54.922165 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" event={"ID":"61bf8a46-92c1-4b2e-9b8c-8206c618b98a","Type":"ContainerDied","Data":"bfc6fb2c7af4721cda20daa88a5f20a73e3cf5c1320015a4258367e4bd4b50ed"} Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.447707 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.650564 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-logging-compute-config-data-1\") pod \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.650950 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-inventory\") pod \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.651092 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn74c\" (UniqueName: \"kubernetes.io/projected/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-kube-api-access-wn74c\") pod \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.651165 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-logging-compute-config-data-0\") pod \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.651255 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-ssh-key-openstack-edpm-ipam\") pod \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\" (UID: \"61bf8a46-92c1-4b2e-9b8c-8206c618b98a\") " Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.656486 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-kube-api-access-wn74c" (OuterVolumeSpecName: "kube-api-access-wn74c") pod "61bf8a46-92c1-4b2e-9b8c-8206c618b98a" (UID: "61bf8a46-92c1-4b2e-9b8c-8206c618b98a"). InnerVolumeSpecName "kube-api-access-wn74c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.680958 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-inventory" (OuterVolumeSpecName: "inventory") pod "61bf8a46-92c1-4b2e-9b8c-8206c618b98a" (UID: "61bf8a46-92c1-4b2e-9b8c-8206c618b98a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.682667 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "61bf8a46-92c1-4b2e-9b8c-8206c618b98a" (UID: "61bf8a46-92c1-4b2e-9b8c-8206c618b98a"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.687596 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "61bf8a46-92c1-4b2e-9b8c-8206c618b98a" (UID: "61bf8a46-92c1-4b2e-9b8c-8206c618b98a"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.697562 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "61bf8a46-92c1-4b2e-9b8c-8206c618b98a" (UID: "61bf8a46-92c1-4b2e-9b8c-8206c618b98a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.755979 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.756029 4739 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.756046 4739 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.756059 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn74c\" (UniqueName: \"kubernetes.io/projected/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-kube-api-access-wn74c\") on node \"crc\" DevicePath \"\"" Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.756597 4739 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/61bf8a46-92c1-4b2e-9b8c-8206c618b98a-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.948970 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" event={"ID":"61bf8a46-92c1-4b2e-9b8c-8206c618b98a","Type":"ContainerDied","Data":"baeca6ca604c58789f14edfa8b4d71ad6a8b5f57cc825549b3663ebddc48966e"} Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.949049 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baeca6ca604c58789f14edfa8b4d71ad6a8b5f57cc825549b3663ebddc48966e" Feb 18 14:48:56 crc kubenswrapper[4739]: I0218 14:48:56.949051 4739 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-nsjkf" Feb 18 14:48:58 crc kubenswrapper[4739]: I0218 14:48:58.410545 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:48:58 crc kubenswrapper[4739]: E0218 14:48:58.411229 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:49:09 crc kubenswrapper[4739]: I0218 14:49:09.411309 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:49:09 crc kubenswrapper[4739]: E0218 14:49:09.412513 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:49:23 crc kubenswrapper[4739]: I0218 14:49:23.410287 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:49:23 crc kubenswrapper[4739]: E0218 14:49:23.411233 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:49:35 crc kubenswrapper[4739]: I0218 14:49:35.411212 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:49:35 crc kubenswrapper[4739]: E0218 14:49:35.411924 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:49:46 crc kubenswrapper[4739]: I0218 14:49:46.411043 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:49:46 crc kubenswrapper[4739]: E0218 14:49:46.411861 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:50:00 crc kubenswrapper[4739]: I0218 14:50:00.419330 4739 
scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:50:00 crc kubenswrapper[4739]: E0218 14:50:00.420795 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:50:13 crc kubenswrapper[4739]: I0218 14:50:13.413227 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:50:13 crc kubenswrapper[4739]: E0218 14:50:13.416161 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:50:25 crc kubenswrapper[4739]: I0218 14:50:25.412810 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:50:25 crc kubenswrapper[4739]: E0218 14:50:25.413980 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:50:40 crc kubenswrapper[4739]: I0218 14:50:40.411486 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:50:40 crc kubenswrapper[4739]: E0218 14:50:40.412317 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:50:52 crc kubenswrapper[4739]: I0218 14:50:52.411414 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:50:52 crc kubenswrapper[4739]: E0218 14:50:52.412316 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:51:05 crc kubenswrapper[4739]: I0218 14:51:05.411202 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:51:05 crc kubenswrapper[4739]: E0218 14:51:05.412051 4739 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:51:16 crc kubenswrapper[4739]: I0218 14:51:16.411177 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:51:16 crc kubenswrapper[4739]: E0218 14:51:16.411897 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:51:31 crc kubenswrapper[4739]: I0218 14:51:31.410987 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:51:31 crc kubenswrapper[4739]: E0218 14:51:31.411831 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:51:46 crc kubenswrapper[4739]: I0218 14:51:46.411587 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:51:46 crc kubenswrapper[4739]: E0218 14:51:46.412596 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:51:59 crc kubenswrapper[4739]: I0218 14:51:59.411142 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:51:59 crc kubenswrapper[4739]: I0218 14:51:59.941699 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"626c3d9491b2d461f2086323694bdf72c0f1d12e52fb2ce99a533efc05c818dd"} Feb 18 14:53:59 crc kubenswrapper[4739]: I0218 14:53:59.372275 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:53:59 crc kubenswrapper[4739]: I0218 14:53:59.372904 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:54:29 crc kubenswrapper[4739]: I0218 14:54:29.372882 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:54:29 crc kubenswrapper[4739]: I0218 14:54:29.373431 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:54:59 crc kubenswrapper[4739]: I0218 14:54:59.372371 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:54:59 crc kubenswrapper[4739]: I0218 14:54:59.372932 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:54:59 crc kubenswrapper[4739]: I0218 14:54:59.372985 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 14:54:59 crc kubenswrapper[4739]: I0218 14:54:59.373857 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"626c3d9491b2d461f2086323694bdf72c0f1d12e52fb2ce99a533efc05c818dd"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:54:59 crc kubenswrapper[4739]: I0218 14:54:59.373923 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://626c3d9491b2d461f2086323694bdf72c0f1d12e52fb2ce99a533efc05c818dd" gracePeriod=600 Feb 18 14:54:59 crc kubenswrapper[4739]: I0218 14:54:59.879196 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="626c3d9491b2d461f2086323694bdf72c0f1d12e52fb2ce99a533efc05c818dd" exitCode=0 Feb 18 14:54:59 crc kubenswrapper[4739]: I0218 14:54:59.879261 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"626c3d9491b2d461f2086323694bdf72c0f1d12e52fb2ce99a533efc05c818dd"} Feb 18 14:54:59 crc kubenswrapper[4739]: I0218 14:54:59.879821 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" 
event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da"} Feb 18 14:54:59 crc kubenswrapper[4739]: I0218 14:54:59.879846 4739 scope.go:117] "RemoveContainer" containerID="c44b63a41008d49723c52fef63f57d42280fec125dd31e34e381b869df8587d6" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.464274 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jw2pk"] Feb 18 14:55:15 crc kubenswrapper[4739]: E0218 14:55:15.465484 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61bf8a46-92c1-4b2e-9b8c-8206c618b98a" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.465505 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="61bf8a46-92c1-4b2e-9b8c-8206c618b98a" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.465807 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="61bf8a46-92c1-4b2e-9b8c-8206c618b98a" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.468705 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.480730 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jw2pk"] Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.527411 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-catalog-content\") pod \"redhat-operators-jw2pk\" (UID: \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\") " pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.527592 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc8wp\" (UniqueName: \"kubernetes.io/projected/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-kube-api-access-kc8wp\") pod \"redhat-operators-jw2pk\" (UID: \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\") " pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.527646 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-utilities\") pod \"redhat-operators-jw2pk\" (UID: \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\") " pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.629974 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-catalog-content\") pod \"redhat-operators-jw2pk\" (UID: \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\") " pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.630100 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc8wp\" (UniqueName: \"kubernetes.io/projected/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-kube-api-access-kc8wp\") pod \"redhat-operators-jw2pk\" (UID: \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\") " 
pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.630162 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-utilities\") pod \"redhat-operators-jw2pk\" (UID: \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\") " pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.630715 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-catalog-content\") pod \"redhat-operators-jw2pk\" (UID: \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\") " pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.630766 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-utilities\") pod \"redhat-operators-jw2pk\" (UID: \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\") " pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.675072 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc8wp\" (UniqueName: \"kubernetes.io/projected/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-kube-api-access-kc8wp\") pod \"redhat-operators-jw2pk\" (UID: \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\") " pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:15 crc kubenswrapper[4739]: I0218 14:55:15.789704 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:16 crc kubenswrapper[4739]: I0218 14:55:16.349486 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jw2pk"] Feb 18 14:55:17 crc kubenswrapper[4739]: I0218 14:55:17.090042 4739 generic.go:334] "Generic (PLEG): container finished" podID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerID="75c448edd520d793378d564d5231dd98a90ddc5aa490b5f61489057e12e4ba4d" exitCode=0 Feb 18 14:55:17 crc kubenswrapper[4739]: I0218 14:55:17.090124 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw2pk" event={"ID":"c65e9c1e-6895-4ddc-b74a-c424fea4c24d","Type":"ContainerDied","Data":"75c448edd520d793378d564d5231dd98a90ddc5aa490b5f61489057e12e4ba4d"} Feb 18 14:55:17 crc kubenswrapper[4739]: I0218 14:55:17.090570 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw2pk" event={"ID":"c65e9c1e-6895-4ddc-b74a-c424fea4c24d","Type":"ContainerStarted","Data":"d7c1c1de4dc9d0ec875b26c0571eaea101b3e4818dfa0680d7d39593a5b81682"} Feb 18 14:55:17 crc kubenswrapper[4739]: I0218 14:55:17.094470 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 14:55:20 crc kubenswrapper[4739]: I0218 14:55:20.131144 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw2pk" event={"ID":"c65e9c1e-6895-4ddc-b74a-c424fea4c24d","Type":"ContainerStarted","Data":"6c7450852686b48c6ed2b63ba52bf36b92bee6150626d660f327328f10074074"} Feb 18 14:55:26 crc kubenswrapper[4739]: I0218 14:55:26.204710 4739 generic.go:334] "Generic (PLEG): container finished" podID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" 
containerID="6c7450852686b48c6ed2b63ba52bf36b92bee6150626d660f327328f10074074" exitCode=0 Feb 18 14:55:26 crc kubenswrapper[4739]: I0218 14:55:26.204746 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw2pk" event={"ID":"c65e9c1e-6895-4ddc-b74a-c424fea4c24d","Type":"ContainerDied","Data":"6c7450852686b48c6ed2b63ba52bf36b92bee6150626d660f327328f10074074"} Feb 18 14:55:27 crc kubenswrapper[4739]: I0218 14:55:27.218343 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw2pk" event={"ID":"c65e9c1e-6895-4ddc-b74a-c424fea4c24d","Type":"ContainerStarted","Data":"a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1"} Feb 18 14:55:27 crc kubenswrapper[4739]: I0218 14:55:27.247838 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jw2pk" podStartSLOduration=2.705990508 podStartE2EDuration="12.247820383s" podCreationTimestamp="2026-02-18 14:55:15 +0000 UTC" firstStartedPulling="2026-02-18 14:55:17.09419291 +0000 UTC m=+3349.589913832" lastFinishedPulling="2026-02-18 14:55:26.636022785 +0000 UTC m=+3359.131743707" observedRunningTime="2026-02-18 14:55:27.23856055 +0000 UTC m=+3359.734281502" watchObservedRunningTime="2026-02-18 14:55:27.247820383 +0000 UTC m=+3359.743541305" Feb 18 14:55:35 crc kubenswrapper[4739]: I0218 14:55:35.789962 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:35 crc kubenswrapper[4739]: I0218 14:55:35.790741 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:55:36 crc kubenswrapper[4739]: I0218 14:55:36.845760 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jw2pk" podUID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerName="registry-server" probeResult="failure" output=< Feb 18 14:55:36 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:55:36 crc kubenswrapper[4739]: > Feb 18 14:55:46 crc kubenswrapper[4739]: I0218 14:55:46.847591 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jw2pk" podUID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerName="registry-server" probeResult="failure" output=< Feb 18 14:55:46 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:55:46 crc kubenswrapper[4739]: > Feb 18 14:55:56 crc kubenswrapper[4739]: I0218 14:55:56.844601 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jw2pk" podUID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerName="registry-server" probeResult="failure" output=< Feb 18 14:55:56 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:55:56 crc kubenswrapper[4739]: > Feb 18 14:56:05 crc kubenswrapper[4739]: I0218 14:56:05.839725 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:56:05 crc kubenswrapper[4739]: I0218 14:56:05.900357 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:56:06 crc kubenswrapper[4739]: I0218 14:56:06.085631 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jw2pk"] Feb 18 14:56:07 crc 
kubenswrapper[4739]: I0218 14:56:07.744968 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jw2pk" podUID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerName="registry-server" containerID="cri-o://a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1" gracePeriod=2 Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.334835 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.486986 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc8wp\" (UniqueName: \"kubernetes.io/projected/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-kube-api-access-kc8wp\") pod \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\" (UID: \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\") " Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.487115 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-utilities\") pod \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\" (UID: \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\") " Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.487233 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-catalog-content\") pod \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\" (UID: \"c65e9c1e-6895-4ddc-b74a-c424fea4c24d\") " Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.488028 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-utilities" (OuterVolumeSpecName: "utilities") pod "c65e9c1e-6895-4ddc-b74a-c424fea4c24d" (UID: "c65e9c1e-6895-4ddc-b74a-c424fea4c24d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.489265 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.494780 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-kube-api-access-kc8wp" (OuterVolumeSpecName: "kube-api-access-kc8wp") pod "c65e9c1e-6895-4ddc-b74a-c424fea4c24d" (UID: "c65e9c1e-6895-4ddc-b74a-c424fea4c24d"). InnerVolumeSpecName "kube-api-access-kc8wp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.591377 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc8wp\" (UniqueName: \"kubernetes.io/projected/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-kube-api-access-kc8wp\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.659923 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c65e9c1e-6895-4ddc-b74a-c424fea4c24d" (UID: "c65e9c1e-6895-4ddc-b74a-c424fea4c24d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.693975 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c65e9c1e-6895-4ddc-b74a-c424fea4c24d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.757028 4739 generic.go:334] "Generic (PLEG): container finished" podID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerID="a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1" exitCode=0 Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.757080 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw2pk" event={"ID":"c65e9c1e-6895-4ddc-b74a-c424fea4c24d","Type":"ContainerDied","Data":"a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1"} Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.757115 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jw2pk" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.757132 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jw2pk" event={"ID":"c65e9c1e-6895-4ddc-b74a-c424fea4c24d","Type":"ContainerDied","Data":"d7c1c1de4dc9d0ec875b26c0571eaea101b3e4818dfa0680d7d39593a5b81682"} Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.757153 4739 scope.go:117] "RemoveContainer" containerID="a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.784955 4739 scope.go:117] "RemoveContainer" containerID="6c7450852686b48c6ed2b63ba52bf36b92bee6150626d660f327328f10074074" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.802246 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jw2pk"] Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.814222 4739 scope.go:117] "RemoveContainer" containerID="75c448edd520d793378d564d5231dd98a90ddc5aa490b5f61489057e12e4ba4d" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.816469 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jw2pk"] Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.880036 4739 scope.go:117] "RemoveContainer" containerID="a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1" Feb 18 14:56:08 crc kubenswrapper[4739]: E0218 14:56:08.880755 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1\": container with ID starting with a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1 not found: ID does not exist" containerID="a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.880833 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1"} err="failed to get container status \"a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1\": rpc error: code = NotFound desc = could not find container \"a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1\": container with ID starting with a136c8c0931b493515cbdb16e0e60c67b9e61d94ecffdbe2a3ec505086d878c1 not found: ID does not exist" Feb 18 14:56:08 crc 
kubenswrapper[4739]: I0218 14:56:08.880891 4739 scope.go:117] "RemoveContainer" containerID="6c7450852686b48c6ed2b63ba52bf36b92bee6150626d660f327328f10074074" Feb 18 14:56:08 crc kubenswrapper[4739]: E0218 14:56:08.881472 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c7450852686b48c6ed2b63ba52bf36b92bee6150626d660f327328f10074074\": container with ID starting with 6c7450852686b48c6ed2b63ba52bf36b92bee6150626d660f327328f10074074 not found: ID does not exist" containerID="6c7450852686b48c6ed2b63ba52bf36b92bee6150626d660f327328f10074074" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.881622 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c7450852686b48c6ed2b63ba52bf36b92bee6150626d660f327328f10074074"} err="failed to get container status \"6c7450852686b48c6ed2b63ba52bf36b92bee6150626d660f327328f10074074\": rpc error: code = NotFound desc = could not find container \"6c7450852686b48c6ed2b63ba52bf36b92bee6150626d660f327328f10074074\": container with ID starting with 6c7450852686b48c6ed2b63ba52bf36b92bee6150626d660f327328f10074074 not found: ID does not exist" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.881722 4739 scope.go:117] "RemoveContainer" containerID="75c448edd520d793378d564d5231dd98a90ddc5aa490b5f61489057e12e4ba4d" Feb 18 14:56:08 crc kubenswrapper[4739]: E0218 14:56:08.882287 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75c448edd520d793378d564d5231dd98a90ddc5aa490b5f61489057e12e4ba4d\": container with ID starting with 75c448edd520d793378d564d5231dd98a90ddc5aa490b5f61489057e12e4ba4d not found: ID does not exist" containerID="75c448edd520d793378d564d5231dd98a90ddc5aa490b5f61489057e12e4ba4d" Feb 18 14:56:08 crc kubenswrapper[4739]: I0218 14:56:08.882325 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75c448edd520d793378d564d5231dd98a90ddc5aa490b5f61489057e12e4ba4d"} err="failed to get container status \"75c448edd520d793378d564d5231dd98a90ddc5aa490b5f61489057e12e4ba4d\": rpc error: code = NotFound desc = could not find container \"75c448edd520d793378d564d5231dd98a90ddc5aa490b5f61489057e12e4ba4d\": container with ID starting with 75c448edd520d793378d564d5231dd98a90ddc5aa490b5f61489057e12e4ba4d not found: ID does not exist" Feb 18 14:56:10 crc kubenswrapper[4739]: I0218 14:56:10.425283 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" path="/var/lib/kubelet/pods/c65e9c1e-6895-4ddc-b74a-c424fea4c24d/volumes" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.585822 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fmqk2"] Feb 18 14:56:26 crc kubenswrapper[4739]: E0218 14:56:26.586726 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerName="extract-utilities" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.586740 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerName="extract-utilities" Feb 18 14:56:26 crc kubenswrapper[4739]: E0218 14:56:26.586756 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerName="registry-server" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.586762 4739 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerName="registry-server" Feb 18 14:56:26 crc kubenswrapper[4739]: E0218 14:56:26.586776 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerName="extract-content" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.586781 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerName="extract-content" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.587033 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="c65e9c1e-6895-4ddc-b74a-c424fea4c24d" containerName="registry-server" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.588731 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.600347 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fmqk2"] Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.735754 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6phl5\" (UniqueName: \"kubernetes.io/projected/f143bfcf-f351-4ede-ab73-311c97dcb20d-kube-api-access-6phl5\") pod \"community-operators-fmqk2\" (UID: \"f143bfcf-f351-4ede-ab73-311c97dcb20d\") " pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.735848 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f143bfcf-f351-4ede-ab73-311c97dcb20d-utilities\") pod \"community-operators-fmqk2\" (UID: \"f143bfcf-f351-4ede-ab73-311c97dcb20d\") " pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.735886 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f143bfcf-f351-4ede-ab73-311c97dcb20d-catalog-content\") pod \"community-operators-fmqk2\" (UID: \"f143bfcf-f351-4ede-ab73-311c97dcb20d\") " pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.838053 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6phl5\" (UniqueName: \"kubernetes.io/projected/f143bfcf-f351-4ede-ab73-311c97dcb20d-kube-api-access-6phl5\") pod \"community-operators-fmqk2\" (UID: \"f143bfcf-f351-4ede-ab73-311c97dcb20d\") " pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.838496 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f143bfcf-f351-4ede-ab73-311c97dcb20d-utilities\") pod \"community-operators-fmqk2\" (UID: \"f143bfcf-f351-4ede-ab73-311c97dcb20d\") " pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.838646 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f143bfcf-f351-4ede-ab73-311c97dcb20d-catalog-content\") pod \"community-operators-fmqk2\" (UID: \"f143bfcf-f351-4ede-ab73-311c97dcb20d\") " pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:26 crc 
kubenswrapper[4739]: I0218 14:56:26.839028 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f143bfcf-f351-4ede-ab73-311c97dcb20d-utilities\") pod \"community-operators-fmqk2\" (UID: \"f143bfcf-f351-4ede-ab73-311c97dcb20d\") " pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.839092 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f143bfcf-f351-4ede-ab73-311c97dcb20d-catalog-content\") pod \"community-operators-fmqk2\" (UID: \"f143bfcf-f351-4ede-ab73-311c97dcb20d\") " pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.867143 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6phl5\" (UniqueName: \"kubernetes.io/projected/f143bfcf-f351-4ede-ab73-311c97dcb20d-kube-api-access-6phl5\") pod \"community-operators-fmqk2\" (UID: \"f143bfcf-f351-4ede-ab73-311c97dcb20d\") " pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:26 crc kubenswrapper[4739]: I0218 14:56:26.919020 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:27 crc kubenswrapper[4739]: I0218 14:56:27.494840 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fmqk2"] Feb 18 14:56:27 crc kubenswrapper[4739]: I0218 14:56:27.951038 4739 generic.go:334] "Generic (PLEG): container finished" podID="f143bfcf-f351-4ede-ab73-311c97dcb20d" containerID="d68ad1d18d91197ec1f8e84e10ae66569f5c214d767791ef0a18af0cd8d3237b" exitCode=0 Feb 18 14:56:27 crc kubenswrapper[4739]: I0218 14:56:27.951148 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmqk2" event={"ID":"f143bfcf-f351-4ede-ab73-311c97dcb20d","Type":"ContainerDied","Data":"d68ad1d18d91197ec1f8e84e10ae66569f5c214d767791ef0a18af0cd8d3237b"} Feb 18 14:56:27 crc kubenswrapper[4739]: I0218 14:56:27.951345 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmqk2" event={"ID":"f143bfcf-f351-4ede-ab73-311c97dcb20d","Type":"ContainerStarted","Data":"0fc713da6fd348d2f8ab44a4aedd5b4a245f74b3dd2f7484d6e521dbc02aed14"} Feb 18 14:56:36 crc kubenswrapper[4739]: I0218 14:56:36.032514 4739 generic.go:334] "Generic (PLEG): container finished" podID="f143bfcf-f351-4ede-ab73-311c97dcb20d" containerID="3db33673d7628b06fbaba06cd09a2912b4ea7614af78f4dbe9f17b3b037b7284" exitCode=0 Feb 18 14:56:36 crc kubenswrapper[4739]: I0218 14:56:36.033044 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmqk2" event={"ID":"f143bfcf-f351-4ede-ab73-311c97dcb20d","Type":"ContainerDied","Data":"3db33673d7628b06fbaba06cd09a2912b4ea7614af78f4dbe9f17b3b037b7284"} Feb 18 14:56:37 crc kubenswrapper[4739]: I0218 14:56:37.047857 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmqk2" event={"ID":"f143bfcf-f351-4ede-ab73-311c97dcb20d","Type":"ContainerStarted","Data":"720086e6b307316c40afce7265cd05ecc4ba0790375e277ad74c6aaad6364bed"} Feb 18 14:56:37 crc kubenswrapper[4739]: I0218 14:56:37.086974 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fmqk2" podStartSLOduration=2.60712645 
podStartE2EDuration="11.086952558s" podCreationTimestamp="2026-02-18 14:56:26 +0000 UTC" firstStartedPulling="2026-02-18 14:56:27.953089541 +0000 UTC m=+3420.448810463" lastFinishedPulling="2026-02-18 14:56:36.432915649 +0000 UTC m=+3428.928636571" observedRunningTime="2026-02-18 14:56:37.076218618 +0000 UTC m=+3429.571939550" watchObservedRunningTime="2026-02-18 14:56:37.086952558 +0000 UTC m=+3429.582673480" Feb 18 14:56:46 crc kubenswrapper[4739]: I0218 14:56:46.920350 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:46 crc kubenswrapper[4739]: I0218 14:56:46.921345 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:46 crc kubenswrapper[4739]: I0218 14:56:46.968929 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:47 crc kubenswrapper[4739]: I0218 14:56:47.211123 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fmqk2" Feb 18 14:56:47 crc kubenswrapper[4739]: I0218 14:56:47.282648 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fmqk2"] Feb 18 14:56:47 crc kubenswrapper[4739]: I0218 14:56:47.319476 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-94tzm"] Feb 18 14:56:47 crc kubenswrapper[4739]: I0218 14:56:47.319707 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-94tzm" podUID="3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" containerName="registry-server" containerID="cri-o://07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689" gracePeriod=2 Feb 18 14:56:47 crc kubenswrapper[4739]: I0218 14:56:47.865974 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:56:47 crc kubenswrapper[4739]: I0218 14:56:47.966727 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvzff\" (UniqueName: \"kubernetes.io/projected/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-kube-api-access-lvzff\") pod \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\" (UID: \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\") " Feb 18 14:56:47 crc kubenswrapper[4739]: I0218 14:56:47.966959 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-catalog-content\") pod \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\" (UID: \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\") " Feb 18 14:56:47 crc kubenswrapper[4739]: I0218 14:56:47.967011 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-utilities\") pod \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\" (UID: \"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f\") " Feb 18 14:56:47 crc kubenswrapper[4739]: I0218 14:56:47.968165 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-utilities" (OuterVolumeSpecName: "utilities") pod "3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" (UID: "3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:56:47 crc kubenswrapper[4739]: I0218 14:56:47.978286 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-kube-api-access-lvzff" (OuterVolumeSpecName: "kube-api-access-lvzff") pod "3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" (UID: "3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f"). InnerVolumeSpecName "kube-api-access-lvzff". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.039391 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" (UID: "3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.070042 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvzff\" (UniqueName: \"kubernetes.io/projected/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-kube-api-access-lvzff\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.070333 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.070344 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.174542 4739 generic.go:334] "Generic (PLEG): container finished" podID="3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" containerID="07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689" exitCode=0 Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.175288 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-94tzm" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.180364 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94tzm" event={"ID":"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f","Type":"ContainerDied","Data":"07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689"} Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.180422 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94tzm" event={"ID":"3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f","Type":"ContainerDied","Data":"9db4c60d6322480e701f551598fedffb94eb253b0f0fc2549d5772b70af9210c"} Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.180455 4739 scope.go:117] "RemoveContainer" containerID="07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.238252 4739 scope.go:117] "RemoveContainer" containerID="20ed1693da7b48e3233b021e00faeb52a068d3b6e995b6ca84280467ac46b548" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.250714 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-94tzm"] Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.277757 4739 scope.go:117] "RemoveContainer" containerID="18a249ca987a1ebbb58305862051507b7e7af51d7b66dfb11920eefffec1ed3f" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.300545 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-94tzm"] Feb 18 14:56:48 crc kubenswrapper[4739]: E0218 14:56:48.301770 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fb1fe47_cb9e_4538_9fc8_a6e75ac4279f.slice/crio-9db4c60d6322480e701f551598fedffb94eb253b0f0fc2549d5772b70af9210c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fb1fe47_cb9e_4538_9fc8_a6e75ac4279f.slice\": RecentStats: unable to find data in memory cache]" Feb 18 14:56:48 crc kubenswrapper[4739]: E0218 14:56:48.302188 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fb1fe47_cb9e_4538_9fc8_a6e75ac4279f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fb1fe47_cb9e_4538_9fc8_a6e75ac4279f.slice/crio-9db4c60d6322480e701f551598fedffb94eb253b0f0fc2549d5772b70af9210c\": RecentStats: unable to find data in memory cache]" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.334573 4739 scope.go:117] "RemoveContainer" containerID="07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689" Feb 18 14:56:48 crc kubenswrapper[4739]: E0218 14:56:48.335443 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689\": container with ID starting with 07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689 not found: ID does not exist" containerID="07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.339625 4739 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689"} err="failed to get container status \"07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689\": rpc error: code = NotFound desc = could not find container \"07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689\": container with ID starting with 07474b55eb9bc5ed3c33596df4869e510262c8331c9b524667dcc2a16bd56689 not found: ID does not exist" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.339662 4739 scope.go:117] "RemoveContainer" containerID="20ed1693da7b48e3233b021e00faeb52a068d3b6e995b6ca84280467ac46b548" Feb 18 14:56:48 crc kubenswrapper[4739]: E0218 14:56:48.340583 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20ed1693da7b48e3233b021e00faeb52a068d3b6e995b6ca84280467ac46b548\": container with ID starting with 20ed1693da7b48e3233b021e00faeb52a068d3b6e995b6ca84280467ac46b548 not found: ID does not exist" containerID="20ed1693da7b48e3233b021e00faeb52a068d3b6e995b6ca84280467ac46b548" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.340654 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20ed1693da7b48e3233b021e00faeb52a068d3b6e995b6ca84280467ac46b548"} err="failed to get container status \"20ed1693da7b48e3233b021e00faeb52a068d3b6e995b6ca84280467ac46b548\": rpc error: code = NotFound desc = could not find container \"20ed1693da7b48e3233b021e00faeb52a068d3b6e995b6ca84280467ac46b548\": container with ID starting with 20ed1693da7b48e3233b021e00faeb52a068d3b6e995b6ca84280467ac46b548 not found: ID does not exist" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.340687 4739 scope.go:117] "RemoveContainer" containerID="18a249ca987a1ebbb58305862051507b7e7af51d7b66dfb11920eefffec1ed3f" Feb 18 14:56:48 crc kubenswrapper[4739]: E0218 14:56:48.346297 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18a249ca987a1ebbb58305862051507b7e7af51d7b66dfb11920eefffec1ed3f\": container with ID starting with 18a249ca987a1ebbb58305862051507b7e7af51d7b66dfb11920eefffec1ed3f not found: ID does not exist" containerID="18a249ca987a1ebbb58305862051507b7e7af51d7b66dfb11920eefffec1ed3f" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.346362 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18a249ca987a1ebbb58305862051507b7e7af51d7b66dfb11920eefffec1ed3f"} err="failed to get container status \"18a249ca987a1ebbb58305862051507b7e7af51d7b66dfb11920eefffec1ed3f\": rpc error: code = NotFound desc = could not find container \"18a249ca987a1ebbb58305862051507b7e7af51d7b66dfb11920eefffec1ed3f\": container with ID starting with 18a249ca987a1ebbb58305862051507b7e7af51d7b66dfb11920eefffec1ed3f not found: ID does not exist" Feb 18 14:56:48 crc kubenswrapper[4739]: I0218 14:56:48.429788 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" path="/var/lib/kubelet/pods/3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f/volumes" Feb 18 14:56:59 crc kubenswrapper[4739]: I0218 14:56:59.372675 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 
14:56:59 crc kubenswrapper[4739]: I0218 14:56:59.373156 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:57:29 crc kubenswrapper[4739]: I0218 14:57:29.372670 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:57:29 crc kubenswrapper[4739]: I0218 14:57:29.373488 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.006640 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-glcp4"] Feb 18 14:57:47 crc kubenswrapper[4739]: E0218 14:57:47.008804 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" containerName="extract-content" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.008921 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" containerName="extract-content" Feb 18 14:57:47 crc kubenswrapper[4739]: E0218 14:57:47.009018 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" containerName="registry-server" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.009124 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" containerName="registry-server" Feb 18 14:57:47 crc kubenswrapper[4739]: E0218 14:57:47.009228 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" containerName="extract-utilities" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.009323 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" containerName="extract-utilities" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.009988 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fb1fe47-cb9e-4538-9fc8-a6e75ac4279f" containerName="registry-server" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.014061 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.030938 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-glcp4"] Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.048014 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a4f0075-4eb5-40d8-918f-26e4975b18e0-utilities\") pod \"redhat-marketplace-glcp4\" (UID: \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\") " pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.048130 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bssz\" (UniqueName: \"kubernetes.io/projected/6a4f0075-4eb5-40d8-918f-26e4975b18e0-kube-api-access-7bssz\") pod \"redhat-marketplace-glcp4\" (UID: \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\") " pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.048173 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a4f0075-4eb5-40d8-918f-26e4975b18e0-catalog-content\") pod \"redhat-marketplace-glcp4\" (UID: \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\") " pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.150376 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bssz\" (UniqueName: \"kubernetes.io/projected/6a4f0075-4eb5-40d8-918f-26e4975b18e0-kube-api-access-7bssz\") pod \"redhat-marketplace-glcp4\" (UID: \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\") " pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.150472 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a4f0075-4eb5-40d8-918f-26e4975b18e0-catalog-content\") pod \"redhat-marketplace-glcp4\" (UID: \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\") " pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.150688 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a4f0075-4eb5-40d8-918f-26e4975b18e0-utilities\") pod \"redhat-marketplace-glcp4\" (UID: \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\") " pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.151405 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a4f0075-4eb5-40d8-918f-26e4975b18e0-catalog-content\") pod \"redhat-marketplace-glcp4\" (UID: \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\") " pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.151416 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a4f0075-4eb5-40d8-918f-26e4975b18e0-utilities\") pod \"redhat-marketplace-glcp4\" (UID: \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\") " pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.173636 4739 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7bssz\" (UniqueName: \"kubernetes.io/projected/6a4f0075-4eb5-40d8-918f-26e4975b18e0-kube-api-access-7bssz\") pod \"redhat-marketplace-glcp4\" (UID: \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\") " pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.341133 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:47 crc kubenswrapper[4739]: I0218 14:57:47.882310 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-glcp4"] Feb 18 14:57:48 crc kubenswrapper[4739]: I0218 14:57:48.812414 4739 generic.go:334] "Generic (PLEG): container finished" podID="6a4f0075-4eb5-40d8-918f-26e4975b18e0" containerID="4986ea510038d2194eb6eda70381c01dca6cd89e7477075d4fd987feda6b3f68" exitCode=0 Feb 18 14:57:48 crc kubenswrapper[4739]: I0218 14:57:48.812511 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcp4" event={"ID":"6a4f0075-4eb5-40d8-918f-26e4975b18e0","Type":"ContainerDied","Data":"4986ea510038d2194eb6eda70381c01dca6cd89e7477075d4fd987feda6b3f68"} Feb 18 14:57:48 crc kubenswrapper[4739]: I0218 14:57:48.812889 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcp4" event={"ID":"6a4f0075-4eb5-40d8-918f-26e4975b18e0","Type":"ContainerStarted","Data":"2b9fcd42928c5809815eece70ea95f15c39ba7a2d4e00ab4a2c466ec692b62f7"} Feb 18 14:57:49 crc kubenswrapper[4739]: I0218 14:57:49.831344 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcp4" event={"ID":"6a4f0075-4eb5-40d8-918f-26e4975b18e0","Type":"ContainerStarted","Data":"bfce1601b4be40a26dc3b3301932a3ed155d2dca86d09078fc3d1305d098881b"} Feb 18 14:57:50 crc kubenswrapper[4739]: I0218 14:57:50.845211 4739 generic.go:334] "Generic (PLEG): container finished" podID="6a4f0075-4eb5-40d8-918f-26e4975b18e0" containerID="bfce1601b4be40a26dc3b3301932a3ed155d2dca86d09078fc3d1305d098881b" exitCode=0 Feb 18 14:57:50 crc kubenswrapper[4739]: I0218 14:57:50.845309 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcp4" event={"ID":"6a4f0075-4eb5-40d8-918f-26e4975b18e0","Type":"ContainerDied","Data":"bfce1601b4be40a26dc3b3301932a3ed155d2dca86d09078fc3d1305d098881b"} Feb 18 14:57:51 crc kubenswrapper[4739]: I0218 14:57:51.862091 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcp4" event={"ID":"6a4f0075-4eb5-40d8-918f-26e4975b18e0","Type":"ContainerStarted","Data":"3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16"} Feb 18 14:57:51 crc kubenswrapper[4739]: I0218 14:57:51.886633 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-glcp4" podStartSLOduration=3.405724562 podStartE2EDuration="5.886613481s" podCreationTimestamp="2026-02-18 14:57:46 +0000 UTC" firstStartedPulling="2026-02-18 14:57:48.815597241 +0000 UTC m=+3501.311318183" lastFinishedPulling="2026-02-18 14:57:51.29648618 +0000 UTC m=+3503.792207102" observedRunningTime="2026-02-18 14:57:51.877936383 +0000 UTC m=+3504.373657305" watchObservedRunningTime="2026-02-18 14:57:51.886613481 +0000 UTC m=+3504.382334413" Feb 18 14:57:57 crc kubenswrapper[4739]: I0218 14:57:57.342011 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:57 crc kubenswrapper[4739]: I0218 14:57:57.342417 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:57 crc kubenswrapper[4739]: I0218 14:57:57.391217 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:57 crc kubenswrapper[4739]: I0218 14:57:57.973994 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:57:58 crc kubenswrapper[4739]: I0218 14:57:58.030165 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-glcp4"] Feb 18 14:57:59 crc kubenswrapper[4739]: I0218 14:57:59.373100 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 14:57:59 crc kubenswrapper[4739]: I0218 14:57:59.373173 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 14:57:59 crc kubenswrapper[4739]: I0218 14:57:59.373225 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 14:57:59 crc kubenswrapper[4739]: I0218 14:57:59.374080 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 14:57:59 crc kubenswrapper[4739]: I0218 14:57:59.374135 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" gracePeriod=600 Feb 18 14:57:59 crc kubenswrapper[4739]: E0218 14:57:59.491593 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:57:59 crc kubenswrapper[4739]: I0218 14:57:59.943101 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" exitCode=0 Feb 18 14:57:59 crc kubenswrapper[4739]: I0218 14:57:59.943670 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-glcp4" 
podUID="6a4f0075-4eb5-40d8-918f-26e4975b18e0" containerName="registry-server" containerID="cri-o://3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16" gracePeriod=2 Feb 18 14:57:59 crc kubenswrapper[4739]: I0218 14:57:59.943182 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da"} Feb 18 14:57:59 crc kubenswrapper[4739]: I0218 14:57:59.943793 4739 scope.go:117] "RemoveContainer" containerID="626c3d9491b2d461f2086323694bdf72c0f1d12e52fb2ce99a533efc05c818dd" Feb 18 14:57:59 crc kubenswrapper[4739]: I0218 14:57:59.944794 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 14:57:59 crc kubenswrapper[4739]: E0218 14:57:59.945271 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.486738 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.575370 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bssz\" (UniqueName: \"kubernetes.io/projected/6a4f0075-4eb5-40d8-918f-26e4975b18e0-kube-api-access-7bssz\") pod \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\" (UID: \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\") " Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.575586 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a4f0075-4eb5-40d8-918f-26e4975b18e0-catalog-content\") pod \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\" (UID: \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\") " Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.575639 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a4f0075-4eb5-40d8-918f-26e4975b18e0-utilities\") pod \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\" (UID: \"6a4f0075-4eb5-40d8-918f-26e4975b18e0\") " Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.576937 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a4f0075-4eb5-40d8-918f-26e4975b18e0-utilities" (OuterVolumeSpecName: "utilities") pod "6a4f0075-4eb5-40d8-918f-26e4975b18e0" (UID: "6a4f0075-4eb5-40d8-918f-26e4975b18e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.582791 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a4f0075-4eb5-40d8-918f-26e4975b18e0-kube-api-access-7bssz" (OuterVolumeSpecName: "kube-api-access-7bssz") pod "6a4f0075-4eb5-40d8-918f-26e4975b18e0" (UID: "6a4f0075-4eb5-40d8-918f-26e4975b18e0"). InnerVolumeSpecName "kube-api-access-7bssz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.601348 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a4f0075-4eb5-40d8-918f-26e4975b18e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a4f0075-4eb5-40d8-918f-26e4975b18e0" (UID: "6a4f0075-4eb5-40d8-918f-26e4975b18e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.679243 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a4f0075-4eb5-40d8-918f-26e4975b18e0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.679509 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a4f0075-4eb5-40d8-918f-26e4975b18e0-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.679622 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bssz\" (UniqueName: \"kubernetes.io/projected/6a4f0075-4eb5-40d8-918f-26e4975b18e0-kube-api-access-7bssz\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.955198 4739 generic.go:334] "Generic (PLEG): container finished" podID="6a4f0075-4eb5-40d8-918f-26e4975b18e0" containerID="3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16" exitCode=0 Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.955296 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-glcp4" Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.955304 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcp4" event={"ID":"6a4f0075-4eb5-40d8-918f-26e4975b18e0","Type":"ContainerDied","Data":"3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16"} Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.955423 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-glcp4" event={"ID":"6a4f0075-4eb5-40d8-918f-26e4975b18e0","Type":"ContainerDied","Data":"2b9fcd42928c5809815eece70ea95f15c39ba7a2d4e00ab4a2c466ec692b62f7"} Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.955466 4739 scope.go:117] "RemoveContainer" containerID="3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16" Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.983459 4739 scope.go:117] "RemoveContainer" containerID="bfce1601b4be40a26dc3b3301932a3ed155d2dca86d09078fc3d1305d098881b" Feb 18 14:58:00 crc kubenswrapper[4739]: I0218 14:58:00.999190 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-glcp4"] Feb 18 14:58:01 crc kubenswrapper[4739]: I0218 14:58:01.011771 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-glcp4"] Feb 18 14:58:01 crc kubenswrapper[4739]: I0218 14:58:01.029948 4739 scope.go:117] "RemoveContainer" containerID="4986ea510038d2194eb6eda70381c01dca6cd89e7477075d4fd987feda6b3f68" Feb 18 14:58:01 crc kubenswrapper[4739]: I0218 14:58:01.068111 4739 scope.go:117] "RemoveContainer" containerID="3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16" Feb 18 14:58:01 crc kubenswrapper[4739]: E0218 14:58:01.068875 4739 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16\": container with ID starting with 3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16 not found: ID does not exist" containerID="3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16" Feb 18 14:58:01 crc kubenswrapper[4739]: I0218 14:58:01.068927 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16"} err="failed to get container status \"3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16\": rpc error: code = NotFound desc = could not find container \"3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16\": container with ID starting with 3403793464cbfe0d8c30242fa543d79d4d3c0a53c3a998bfa034d896608a2e16 not found: ID does not exist" Feb 18 14:58:01 crc kubenswrapper[4739]: I0218 14:58:01.068962 4739 scope.go:117] "RemoveContainer" containerID="bfce1601b4be40a26dc3b3301932a3ed155d2dca86d09078fc3d1305d098881b" Feb 18 14:58:01 crc kubenswrapper[4739]: E0218 14:58:01.069243 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfce1601b4be40a26dc3b3301932a3ed155d2dca86d09078fc3d1305d098881b\": container with ID starting with bfce1601b4be40a26dc3b3301932a3ed155d2dca86d09078fc3d1305d098881b not found: ID does not exist" containerID="bfce1601b4be40a26dc3b3301932a3ed155d2dca86d09078fc3d1305d098881b" Feb 18 14:58:01 crc kubenswrapper[4739]: I0218 14:58:01.069328 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfce1601b4be40a26dc3b3301932a3ed155d2dca86d09078fc3d1305d098881b"} err="failed to get container status \"bfce1601b4be40a26dc3b3301932a3ed155d2dca86d09078fc3d1305d098881b\": rpc error: code = NotFound desc = could not find container \"bfce1601b4be40a26dc3b3301932a3ed155d2dca86d09078fc3d1305d098881b\": container with ID starting with bfce1601b4be40a26dc3b3301932a3ed155d2dca86d09078fc3d1305d098881b not found: ID does not exist" Feb 18 14:58:01 crc kubenswrapper[4739]: I0218 14:58:01.069408 4739 scope.go:117] "RemoveContainer" containerID="4986ea510038d2194eb6eda70381c01dca6cd89e7477075d4fd987feda6b3f68" Feb 18 14:58:01 crc kubenswrapper[4739]: E0218 14:58:01.069738 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4986ea510038d2194eb6eda70381c01dca6cd89e7477075d4fd987feda6b3f68\": container with ID starting with 4986ea510038d2194eb6eda70381c01dca6cd89e7477075d4fd987feda6b3f68 not found: ID does not exist" containerID="4986ea510038d2194eb6eda70381c01dca6cd89e7477075d4fd987feda6b3f68" Feb 18 14:58:01 crc kubenswrapper[4739]: I0218 14:58:01.069772 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4986ea510038d2194eb6eda70381c01dca6cd89e7477075d4fd987feda6b3f68"} err="failed to get container status \"4986ea510038d2194eb6eda70381c01dca6cd89e7477075d4fd987feda6b3f68\": rpc error: code = NotFound desc = could not find container \"4986ea510038d2194eb6eda70381c01dca6cd89e7477075d4fd987feda6b3f68\": container with ID starting with 4986ea510038d2194eb6eda70381c01dca6cd89e7477075d4fd987feda6b3f68 not found: ID does not exist" Feb 18 14:58:02 crc kubenswrapper[4739]: I0218 14:58:02.425479 4739 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="6a4f0075-4eb5-40d8-918f-26e4975b18e0" path="/var/lib/kubelet/pods/6a4f0075-4eb5-40d8-918f-26e4975b18e0/volumes" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.693398 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xvz72"] Feb 18 14:58:10 crc kubenswrapper[4739]: E0218 14:58:10.694295 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a4f0075-4eb5-40d8-918f-26e4975b18e0" containerName="extract-content" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.694307 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a4f0075-4eb5-40d8-918f-26e4975b18e0" containerName="extract-content" Feb 18 14:58:10 crc kubenswrapper[4739]: E0218 14:58:10.694337 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a4f0075-4eb5-40d8-918f-26e4975b18e0" containerName="extract-utilities" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.694345 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a4f0075-4eb5-40d8-918f-26e4975b18e0" containerName="extract-utilities" Feb 18 14:58:10 crc kubenswrapper[4739]: E0218 14:58:10.694359 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a4f0075-4eb5-40d8-918f-26e4975b18e0" containerName="registry-server" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.694365 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a4f0075-4eb5-40d8-918f-26e4975b18e0" containerName="registry-server" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.694772 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a4f0075-4eb5-40d8-918f-26e4975b18e0" containerName="registry-server" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.696394 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.723302 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xvz72"] Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.829104 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2e5425-8a4c-4e24-ab8a-310311b52e64-utilities\") pod \"certified-operators-xvz72\" (UID: \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\") " pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.829311 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2e5425-8a4c-4e24-ab8a-310311b52e64-catalog-content\") pod \"certified-operators-xvz72\" (UID: \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\") " pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.829549 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvf96\" (UniqueName: \"kubernetes.io/projected/7d2e5425-8a4c-4e24-ab8a-310311b52e64-kube-api-access-pvf96\") pod \"certified-operators-xvz72\" (UID: \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\") " pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.931794 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2e5425-8a4c-4e24-ab8a-310311b52e64-utilities\") pod \"certified-operators-xvz72\" (UID: \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\") " pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.931866 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2e5425-8a4c-4e24-ab8a-310311b52e64-catalog-content\") pod \"certified-operators-xvz72\" (UID: \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\") " pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.931913 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvf96\" (UniqueName: \"kubernetes.io/projected/7d2e5425-8a4c-4e24-ab8a-310311b52e64-kube-api-access-pvf96\") pod \"certified-operators-xvz72\" (UID: \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\") " pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.932421 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2e5425-8a4c-4e24-ab8a-310311b52e64-utilities\") pod \"certified-operators-xvz72\" (UID: \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\") " pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.932544 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2e5425-8a4c-4e24-ab8a-310311b52e64-catalog-content\") pod \"certified-operators-xvz72\" (UID: \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\") " pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:10 crc kubenswrapper[4739]: I0218 14:58:10.957500 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pvf96\" (UniqueName: \"kubernetes.io/projected/7d2e5425-8a4c-4e24-ab8a-310311b52e64-kube-api-access-pvf96\") pod \"certified-operators-xvz72\" (UID: \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\") " pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:11 crc kubenswrapper[4739]: I0218 14:58:11.021537 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:11 crc kubenswrapper[4739]: I0218 14:58:11.585847 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xvz72"] Feb 18 14:58:12 crc kubenswrapper[4739]: I0218 14:58:12.134093 4739 generic.go:334] "Generic (PLEG): container finished" podID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" containerID="f4984d219e486151dc8099b9c09b7ee74622c83cc95aabc3a8403ef7a6585c50" exitCode=0 Feb 18 14:58:12 crc kubenswrapper[4739]: I0218 14:58:12.134188 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvz72" event={"ID":"7d2e5425-8a4c-4e24-ab8a-310311b52e64","Type":"ContainerDied","Data":"f4984d219e486151dc8099b9c09b7ee74622c83cc95aabc3a8403ef7a6585c50"} Feb 18 14:58:12 crc kubenswrapper[4739]: I0218 14:58:12.134411 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvz72" event={"ID":"7d2e5425-8a4c-4e24-ab8a-310311b52e64","Type":"ContainerStarted","Data":"c535dbd06706ccfeb23e10164229142ffb3298e7e455d6c293695b16f23adee8"} Feb 18 14:58:13 crc kubenswrapper[4739]: I0218 14:58:13.146364 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvz72" event={"ID":"7d2e5425-8a4c-4e24-ab8a-310311b52e64","Type":"ContainerStarted","Data":"6a1e9d0ca4dc8e180f7a7642b9f69d4bb7e25d268e985ac4ec861b81c58ff537"} Feb 18 14:58:15 crc kubenswrapper[4739]: I0218 14:58:15.168632 4739 generic.go:334] "Generic (PLEG): container finished" podID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" containerID="6a1e9d0ca4dc8e180f7a7642b9f69d4bb7e25d268e985ac4ec861b81c58ff537" exitCode=0 Feb 18 14:58:15 crc kubenswrapper[4739]: I0218 14:58:15.168985 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvz72" event={"ID":"7d2e5425-8a4c-4e24-ab8a-310311b52e64","Type":"ContainerDied","Data":"6a1e9d0ca4dc8e180f7a7642b9f69d4bb7e25d268e985ac4ec861b81c58ff537"} Feb 18 14:58:15 crc kubenswrapper[4739]: I0218 14:58:15.410967 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 14:58:15 crc kubenswrapper[4739]: E0218 14:58:15.411382 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:58:16 crc kubenswrapper[4739]: I0218 14:58:16.180753 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvz72" event={"ID":"7d2e5425-8a4c-4e24-ab8a-310311b52e64","Type":"ContainerStarted","Data":"f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308"} Feb 18 14:58:16 crc kubenswrapper[4739]: I0218 14:58:16.198719 4739 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xvz72" podStartSLOduration=2.789859708 podStartE2EDuration="6.1986987s" podCreationTimestamp="2026-02-18 14:58:10 +0000 UTC" firstStartedPulling="2026-02-18 14:58:12.136475324 +0000 UTC m=+3524.632196256" lastFinishedPulling="2026-02-18 14:58:15.545314326 +0000 UTC m=+3528.041035248" observedRunningTime="2026-02-18 14:58:16.197798118 +0000 UTC m=+3528.693519060" watchObservedRunningTime="2026-02-18 14:58:16.1986987 +0000 UTC m=+3528.694419622" Feb 18 14:58:21 crc kubenswrapper[4739]: I0218 14:58:21.022430 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:21 crc kubenswrapper[4739]: I0218 14:58:21.022876 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:22 crc kubenswrapper[4739]: I0218 14:58:22.085692 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-xvz72" podUID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" containerName="registry-server" probeResult="failure" output=< Feb 18 14:58:22 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 14:58:22 crc kubenswrapper[4739]: > Feb 18 14:58:29 crc kubenswrapper[4739]: I0218 14:58:29.410824 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 14:58:29 crc kubenswrapper[4739]: E0218 14:58:29.411929 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:58:31 crc kubenswrapper[4739]: I0218 14:58:31.102328 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:31 crc kubenswrapper[4739]: I0218 14:58:31.160739 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:31 crc kubenswrapper[4739]: I0218 14:58:31.344664 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xvz72"] Feb 18 14:58:32 crc kubenswrapper[4739]: I0218 14:58:32.357048 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xvz72" podUID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" containerName="registry-server" containerID="cri-o://f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308" gracePeriod=2 Feb 18 14:58:32 crc kubenswrapper[4739]: I0218 14:58:32.951434 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.072840 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2e5425-8a4c-4e24-ab8a-310311b52e64-utilities\") pod \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\" (UID: \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\") " Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.073742 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2e5425-8a4c-4e24-ab8a-310311b52e64-catalog-content\") pod \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\" (UID: \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\") " Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.073851 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvf96\" (UniqueName: \"kubernetes.io/projected/7d2e5425-8a4c-4e24-ab8a-310311b52e64-kube-api-access-pvf96\") pod \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\" (UID: \"7d2e5425-8a4c-4e24-ab8a-310311b52e64\") " Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.073859 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d2e5425-8a4c-4e24-ab8a-310311b52e64-utilities" (OuterVolumeSpecName: "utilities") pod "7d2e5425-8a4c-4e24-ab8a-310311b52e64" (UID: "7d2e5425-8a4c-4e24-ab8a-310311b52e64"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.075133 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2e5425-8a4c-4e24-ab8a-310311b52e64-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.080118 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d2e5425-8a4c-4e24-ab8a-310311b52e64-kube-api-access-pvf96" (OuterVolumeSpecName: "kube-api-access-pvf96") pod "7d2e5425-8a4c-4e24-ab8a-310311b52e64" (UID: "7d2e5425-8a4c-4e24-ab8a-310311b52e64"). InnerVolumeSpecName "kube-api-access-pvf96". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.123978 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d2e5425-8a4c-4e24-ab8a-310311b52e64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d2e5425-8a4c-4e24-ab8a-310311b52e64" (UID: "7d2e5425-8a4c-4e24-ab8a-310311b52e64"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.177210 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2e5425-8a4c-4e24-ab8a-310311b52e64-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.177247 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvf96\" (UniqueName: \"kubernetes.io/projected/7d2e5425-8a4c-4e24-ab8a-310311b52e64-kube-api-access-pvf96\") on node \"crc\" DevicePath \"\"" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.371384 4739 generic.go:334] "Generic (PLEG): container finished" podID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" containerID="f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308" exitCode=0 Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.371468 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvz72" event={"ID":"7d2e5425-8a4c-4e24-ab8a-310311b52e64","Type":"ContainerDied","Data":"f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308"} Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.371477 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xvz72" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.371505 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvz72" event={"ID":"7d2e5425-8a4c-4e24-ab8a-310311b52e64","Type":"ContainerDied","Data":"c535dbd06706ccfeb23e10164229142ffb3298e7e455d6c293695b16f23adee8"} Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.371526 4739 scope.go:117] "RemoveContainer" containerID="f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.399601 4739 scope.go:117] "RemoveContainer" containerID="6a1e9d0ca4dc8e180f7a7642b9f69d4bb7e25d268e985ac4ec861b81c58ff537" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.411376 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xvz72"] Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.421994 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xvz72"] Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.438499 4739 scope.go:117] "RemoveContainer" containerID="f4984d219e486151dc8099b9c09b7ee74622c83cc95aabc3a8403ef7a6585c50" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.479944 4739 scope.go:117] "RemoveContainer" containerID="f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308" Feb 18 14:58:33 crc kubenswrapper[4739]: E0218 14:58:33.480584 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308\": container with ID starting with f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308 not found: ID does not exist" containerID="f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.480626 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308"} err="failed to get container status 
\"f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308\": rpc error: code = NotFound desc = could not find container \"f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308\": container with ID starting with f1fc633c817c60fed9fd16d08425612525911b5a27c51f6510b05959b10df308 not found: ID does not exist" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.480653 4739 scope.go:117] "RemoveContainer" containerID="6a1e9d0ca4dc8e180f7a7642b9f69d4bb7e25d268e985ac4ec861b81c58ff537" Feb 18 14:58:33 crc kubenswrapper[4739]: E0218 14:58:33.483308 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a1e9d0ca4dc8e180f7a7642b9f69d4bb7e25d268e985ac4ec861b81c58ff537\": container with ID starting with 6a1e9d0ca4dc8e180f7a7642b9f69d4bb7e25d268e985ac4ec861b81c58ff537 not found: ID does not exist" containerID="6a1e9d0ca4dc8e180f7a7642b9f69d4bb7e25d268e985ac4ec861b81c58ff537" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.483346 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a1e9d0ca4dc8e180f7a7642b9f69d4bb7e25d268e985ac4ec861b81c58ff537"} err="failed to get container status \"6a1e9d0ca4dc8e180f7a7642b9f69d4bb7e25d268e985ac4ec861b81c58ff537\": rpc error: code = NotFound desc = could not find container \"6a1e9d0ca4dc8e180f7a7642b9f69d4bb7e25d268e985ac4ec861b81c58ff537\": container with ID starting with 6a1e9d0ca4dc8e180f7a7642b9f69d4bb7e25d268e985ac4ec861b81c58ff537 not found: ID does not exist" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.483376 4739 scope.go:117] "RemoveContainer" containerID="f4984d219e486151dc8099b9c09b7ee74622c83cc95aabc3a8403ef7a6585c50" Feb 18 14:58:33 crc kubenswrapper[4739]: E0218 14:58:33.483768 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4984d219e486151dc8099b9c09b7ee74622c83cc95aabc3a8403ef7a6585c50\": container with ID starting with f4984d219e486151dc8099b9c09b7ee74622c83cc95aabc3a8403ef7a6585c50 not found: ID does not exist" containerID="f4984d219e486151dc8099b9c09b7ee74622c83cc95aabc3a8403ef7a6585c50" Feb 18 14:58:33 crc kubenswrapper[4739]: I0218 14:58:33.483815 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4984d219e486151dc8099b9c09b7ee74622c83cc95aabc3a8403ef7a6585c50"} err="failed to get container status \"f4984d219e486151dc8099b9c09b7ee74622c83cc95aabc3a8403ef7a6585c50\": rpc error: code = NotFound desc = could not find container \"f4984d219e486151dc8099b9c09b7ee74622c83cc95aabc3a8403ef7a6585c50\": container with ID starting with f4984d219e486151dc8099b9c09b7ee74622c83cc95aabc3a8403ef7a6585c50 not found: ID does not exist" Feb 18 14:58:34 crc kubenswrapper[4739]: I0218 14:58:34.422945 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" path="/var/lib/kubelet/pods/7d2e5425-8a4c-4e24-ab8a-310311b52e64/volumes" Feb 18 14:58:41 crc kubenswrapper[4739]: I0218 14:58:41.411330 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 14:58:41 crc kubenswrapper[4739]: E0218 14:58:41.412365 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:58:53 crc kubenswrapper[4739]: I0218 14:58:53.410562 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 14:58:53 crc kubenswrapper[4739]: E0218 14:58:53.411419 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:59:05 crc kubenswrapper[4739]: I0218 14:59:05.410794 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 14:59:05 crc kubenswrapper[4739]: E0218 14:59:05.411633 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:59:16 crc kubenswrapper[4739]: I0218 14:59:16.411096 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 14:59:16 crc kubenswrapper[4739]: E0218 14:59:16.411935 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:59:27 crc kubenswrapper[4739]: I0218 14:59:27.410918 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 14:59:27 crc kubenswrapper[4739]: E0218 14:59:27.411687 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:59:40 crc kubenswrapper[4739]: I0218 14:59:40.410582 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 14:59:40 crc kubenswrapper[4739]: E0218 14:59:40.411525 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" 
podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 14:59:52 crc kubenswrapper[4739]: I0218 14:59:52.411511 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 14:59:52 crc kubenswrapper[4739]: E0218 14:59:52.412310 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.165592 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr"] Feb 18 15:00:00 crc kubenswrapper[4739]: E0218 15:00:00.166698 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" containerName="extract-utilities" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.166777 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" containerName="extract-utilities" Feb 18 15:00:00 crc kubenswrapper[4739]: E0218 15:00:00.166802 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" containerName="extract-content" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.166810 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" containerName="extract-content" Feb 18 15:00:00 crc kubenswrapper[4739]: E0218 15:00:00.166846 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" containerName="registry-server" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.166856 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" containerName="registry-server" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.167146 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d2e5425-8a4c-4e24-ab8a-310311b52e64" containerName="registry-server" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.168027 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.171309 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.171623 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.177737 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr"] Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.262129 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5lqg\" (UniqueName: \"kubernetes.io/projected/728976dc-da2b-4408-895b-a95d93c23eaa-kube-api-access-z5lqg\") pod \"collect-profiles-29523780-x7zzr\" (UID: \"728976dc-da2b-4408-895b-a95d93c23eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.262585 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/728976dc-da2b-4408-895b-a95d93c23eaa-config-volume\") pod \"collect-profiles-29523780-x7zzr\" (UID: \"728976dc-da2b-4408-895b-a95d93c23eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.262697 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/728976dc-da2b-4408-895b-a95d93c23eaa-secret-volume\") pod \"collect-profiles-29523780-x7zzr\" (UID: \"728976dc-da2b-4408-895b-a95d93c23eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.366945 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/728976dc-da2b-4408-895b-a95d93c23eaa-secret-volume\") pod \"collect-profiles-29523780-x7zzr\" (UID: \"728976dc-da2b-4408-895b-a95d93c23eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.367120 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5lqg\" (UniqueName: \"kubernetes.io/projected/728976dc-da2b-4408-895b-a95d93c23eaa-kube-api-access-z5lqg\") pod \"collect-profiles-29523780-x7zzr\" (UID: \"728976dc-da2b-4408-895b-a95d93c23eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.367205 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/728976dc-da2b-4408-895b-a95d93c23eaa-config-volume\") pod \"collect-profiles-29523780-x7zzr\" (UID: \"728976dc-da2b-4408-895b-a95d93c23eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.368358 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/728976dc-da2b-4408-895b-a95d93c23eaa-config-volume\") pod 
\"collect-profiles-29523780-x7zzr\" (UID: \"728976dc-da2b-4408-895b-a95d93c23eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.373614 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/728976dc-da2b-4408-895b-a95d93c23eaa-secret-volume\") pod \"collect-profiles-29523780-x7zzr\" (UID: \"728976dc-da2b-4408-895b-a95d93c23eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.386683 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5lqg\" (UniqueName: \"kubernetes.io/projected/728976dc-da2b-4408-895b-a95d93c23eaa-kube-api-access-z5lqg\") pod \"collect-profiles-29523780-x7zzr\" (UID: \"728976dc-da2b-4408-895b-a95d93c23eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.496384 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:00 crc kubenswrapper[4739]: I0218 15:00:00.997026 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr"] Feb 18 15:00:01 crc kubenswrapper[4739]: I0218 15:00:01.416657 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" event={"ID":"728976dc-da2b-4408-895b-a95d93c23eaa","Type":"ContainerStarted","Data":"4b2109d2b88bdccb1a25270c62d7be3a7ff8386c84518e3266ea3427cd1d517b"} Feb 18 15:00:01 crc kubenswrapper[4739]: I0218 15:00:01.417852 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" event={"ID":"728976dc-da2b-4408-895b-a95d93c23eaa","Type":"ContainerStarted","Data":"a86f739d4db017fb3ab973a7c91ce9e87f8a45548da3a3625143d544ccb633d5"} Feb 18 15:00:01 crc kubenswrapper[4739]: I0218 15:00:01.466182 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" podStartSLOduration=1.466162739 podStartE2EDuration="1.466162739s" podCreationTimestamp="2026-02-18 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 15:00:01.445758514 +0000 UTC m=+3633.941479436" watchObservedRunningTime="2026-02-18 15:00:01.466162739 +0000 UTC m=+3633.961883661" Feb 18 15:00:02 crc kubenswrapper[4739]: I0218 15:00:02.429307 4739 generic.go:334] "Generic (PLEG): container finished" podID="728976dc-da2b-4408-895b-a95d93c23eaa" containerID="4b2109d2b88bdccb1a25270c62d7be3a7ff8386c84518e3266ea3427cd1d517b" exitCode=0 Feb 18 15:00:02 crc kubenswrapper[4739]: I0218 15:00:02.429366 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" event={"ID":"728976dc-da2b-4408-895b-a95d93c23eaa","Type":"ContainerDied","Data":"4b2109d2b88bdccb1a25270c62d7be3a7ff8386c84518e3266ea3427cd1d517b"} Feb 18 15:00:03 crc kubenswrapper[4739]: I0218 15:00:03.411574 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 15:00:03 crc kubenswrapper[4739]: E0218 15:00:03.412227 4739 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.024233 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.090346 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/728976dc-da2b-4408-895b-a95d93c23eaa-secret-volume\") pod \"728976dc-da2b-4408-895b-a95d93c23eaa\" (UID: \"728976dc-da2b-4408-895b-a95d93c23eaa\") " Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.090967 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5lqg\" (UniqueName: \"kubernetes.io/projected/728976dc-da2b-4408-895b-a95d93c23eaa-kube-api-access-z5lqg\") pod \"728976dc-da2b-4408-895b-a95d93c23eaa\" (UID: \"728976dc-da2b-4408-895b-a95d93c23eaa\") " Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.091076 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/728976dc-da2b-4408-895b-a95d93c23eaa-config-volume\") pod \"728976dc-da2b-4408-895b-a95d93c23eaa\" (UID: \"728976dc-da2b-4408-895b-a95d93c23eaa\") " Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.091990 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/728976dc-da2b-4408-895b-a95d93c23eaa-config-volume" (OuterVolumeSpecName: "config-volume") pod "728976dc-da2b-4408-895b-a95d93c23eaa" (UID: "728976dc-da2b-4408-895b-a95d93c23eaa"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.098008 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/728976dc-da2b-4408-895b-a95d93c23eaa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "728976dc-da2b-4408-895b-a95d93c23eaa" (UID: "728976dc-da2b-4408-895b-a95d93c23eaa"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.098143 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/728976dc-da2b-4408-895b-a95d93c23eaa-kube-api-access-z5lqg" (OuterVolumeSpecName: "kube-api-access-z5lqg") pod "728976dc-da2b-4408-895b-a95d93c23eaa" (UID: "728976dc-da2b-4408-895b-a95d93c23eaa"). InnerVolumeSpecName "kube-api-access-z5lqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.194218 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5lqg\" (UniqueName: \"kubernetes.io/projected/728976dc-da2b-4408-895b-a95d93c23eaa-kube-api-access-z5lqg\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.194279 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/728976dc-da2b-4408-895b-a95d93c23eaa-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.194294 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/728976dc-da2b-4408-895b-a95d93c23eaa-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.873974 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" event={"ID":"728976dc-da2b-4408-895b-a95d93c23eaa","Type":"ContainerDied","Data":"a86f739d4db017fb3ab973a7c91ce9e87f8a45548da3a3625143d544ccb633d5"} Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.874019 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a86f739d4db017fb3ab973a7c91ce9e87f8a45548da3a3625143d544ccb633d5" Feb 18 15:00:06 crc kubenswrapper[4739]: I0218 15:00:06.874079 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523780-x7zzr" Feb 18 15:00:07 crc kubenswrapper[4739]: I0218 15:00:07.106125 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l"] Feb 18 15:00:07 crc kubenswrapper[4739]: I0218 15:00:07.117043 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523735-tpw9l"] Feb 18 15:00:08 crc kubenswrapper[4739]: I0218 15:00:08.426832 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c2918ab-f9b2-46b1-9895-7de44312e98e" path="/var/lib/kubelet/pods/8c2918ab-f9b2-46b1-9895-7de44312e98e/volumes" Feb 18 15:00:18 crc kubenswrapper[4739]: I0218 15:00:18.420929 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 15:00:18 crc kubenswrapper[4739]: E0218 15:00:18.421794 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:00:29 crc kubenswrapper[4739]: I0218 15:00:29.410626 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 15:00:29 crc kubenswrapper[4739]: E0218 15:00:29.411411 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:00:37 crc kubenswrapper[4739]: I0218 15:00:37.844582 4739 scope.go:117] "RemoveContainer" containerID="a63b0fe82e01dc057994e21049631942cf32124ffb8f8b9b2acf4cf4375ae993" Feb 18 15:00:41 crc kubenswrapper[4739]: I0218 15:00:41.411538 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 15:00:41 crc kubenswrapper[4739]: E0218 15:00:41.412456 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:00:52 crc kubenswrapper[4739]: I0218 15:00:52.410804 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 15:00:52 crc kubenswrapper[4739]: E0218 15:00:52.411538 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.152680 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29523781-z64zk"] Feb 18 15:01:00 crc kubenswrapper[4739]: E0218 15:01:00.153985 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="728976dc-da2b-4408-895b-a95d93c23eaa" containerName="collect-profiles" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.154004 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="728976dc-da2b-4408-895b-a95d93c23eaa" containerName="collect-profiles" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.154259 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="728976dc-da2b-4408-895b-a95d93c23eaa" containerName="collect-profiles" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.155161 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29523781-z64zk" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.183608 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29523781-z64zk"] Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.251901 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mczs9\" (UniqueName: \"kubernetes.io/projected/28825764-dace-4769-b71e-4d55b8aa1d97-kube-api-access-mczs9\") pod \"keystone-cron-29523781-z64zk\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " pod="openstack/keystone-cron-29523781-z64zk" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.251976 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-combined-ca-bundle\") pod \"keystone-cron-29523781-z64zk\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " pod="openstack/keystone-cron-29523781-z64zk" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.252119 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-fernet-keys\") pod \"keystone-cron-29523781-z64zk\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " pod="openstack/keystone-cron-29523781-z64zk" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.252189 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-config-data\") pod \"keystone-cron-29523781-z64zk\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " pod="openstack/keystone-cron-29523781-z64zk" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.354815 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mczs9\" (UniqueName: \"kubernetes.io/projected/28825764-dace-4769-b71e-4d55b8aa1d97-kube-api-access-mczs9\") pod \"keystone-cron-29523781-z64zk\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " pod="openstack/keystone-cron-29523781-z64zk" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.354875 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-combined-ca-bundle\") pod \"keystone-cron-29523781-z64zk\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " pod="openstack/keystone-cron-29523781-z64zk" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.354985 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-fernet-keys\") pod \"keystone-cron-29523781-z64zk\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " pod="openstack/keystone-cron-29523781-z64zk" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.355065 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-config-data\") pod \"keystone-cron-29523781-z64zk\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " pod="openstack/keystone-cron-29523781-z64zk" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.362123 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-fernet-keys\") pod \"keystone-cron-29523781-z64zk\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " pod="openstack/keystone-cron-29523781-z64zk" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.362153 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-config-data\") pod \"keystone-cron-29523781-z64zk\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " pod="openstack/keystone-cron-29523781-z64zk" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.364708 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-combined-ca-bundle\") pod \"keystone-cron-29523781-z64zk\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " pod="openstack/keystone-cron-29523781-z64zk" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.373322 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mczs9\" (UniqueName: \"kubernetes.io/projected/28825764-dace-4769-b71e-4d55b8aa1d97-kube-api-access-mczs9\") pod \"keystone-cron-29523781-z64zk\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " pod="openstack/keystone-cron-29523781-z64zk" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.477015 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523781-z64zk" Feb 18 15:01:00 crc kubenswrapper[4739]: I0218 15:01:00.971814 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29523781-z64zk"] Feb 18 15:01:01 crc kubenswrapper[4739]: I0218 15:01:01.458192 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523781-z64zk" event={"ID":"28825764-dace-4769-b71e-4d55b8aa1d97","Type":"ContainerStarted","Data":"143c4a05a618f2ea88fdf0a7c23dcb1be159d0801ceab94582f7c94766c5f06f"} Feb 18 15:01:01 crc kubenswrapper[4739]: I0218 15:01:01.458531 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523781-z64zk" event={"ID":"28825764-dace-4769-b71e-4d55b8aa1d97","Type":"ContainerStarted","Data":"e9ddf44b0aefad5f9fe9a71113008b11ece17f69b1425c1ef2033929a919afe3"} Feb 18 15:01:01 crc kubenswrapper[4739]: I0218 15:01:01.481370 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29523781-z64zk" podStartSLOduration=1.48135139 podStartE2EDuration="1.48135139s" podCreationTimestamp="2026-02-18 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 15:01:01.472796154 +0000 UTC m=+3693.968517086" watchObservedRunningTime="2026-02-18 15:01:01.48135139 +0000 UTC m=+3693.977072312" Feb 18 15:01:06 crc kubenswrapper[4739]: I0218 15:01:06.411044 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 15:01:06 crc kubenswrapper[4739]: E0218 15:01:06.411936 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:01:06 crc kubenswrapper[4739]: I0218 15:01:06.510549 4739 generic.go:334] "Generic (PLEG): container finished" podID="28825764-dace-4769-b71e-4d55b8aa1d97" containerID="143c4a05a618f2ea88fdf0a7c23dcb1be159d0801ceab94582f7c94766c5f06f" exitCode=0 Feb 18 15:01:06 crc kubenswrapper[4739]: I0218 15:01:06.510598 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523781-z64zk" event={"ID":"28825764-dace-4769-b71e-4d55b8aa1d97","Type":"ContainerDied","Data":"143c4a05a618f2ea88fdf0a7c23dcb1be159d0801ceab94582f7c94766c5f06f"} Feb 18 15:01:07 crc kubenswrapper[4739]: I0218 15:01:07.928869 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29523781-z64zk" Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.039755 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-combined-ca-bundle\") pod \"28825764-dace-4769-b71e-4d55b8aa1d97\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.039851 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-config-data\") pod \"28825764-dace-4769-b71e-4d55b8aa1d97\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.040029 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mczs9\" (UniqueName: \"kubernetes.io/projected/28825764-dace-4769-b71e-4d55b8aa1d97-kube-api-access-mczs9\") pod \"28825764-dace-4769-b71e-4d55b8aa1d97\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.040098 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-fernet-keys\") pod \"28825764-dace-4769-b71e-4d55b8aa1d97\" (UID: \"28825764-dace-4769-b71e-4d55b8aa1d97\") " Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.045993 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28825764-dace-4769-b71e-4d55b8aa1d97-kube-api-access-mczs9" (OuterVolumeSpecName: "kube-api-access-mczs9") pod "28825764-dace-4769-b71e-4d55b8aa1d97" (UID: "28825764-dace-4769-b71e-4d55b8aa1d97"). InnerVolumeSpecName "kube-api-access-mczs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.046465 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "28825764-dace-4769-b71e-4d55b8aa1d97" (UID: "28825764-dace-4769-b71e-4d55b8aa1d97"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.071308 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28825764-dace-4769-b71e-4d55b8aa1d97" (UID: "28825764-dace-4769-b71e-4d55b8aa1d97"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.098981 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-config-data" (OuterVolumeSpecName: "config-data") pod "28825764-dace-4769-b71e-4d55b8aa1d97" (UID: "28825764-dace-4769-b71e-4d55b8aa1d97"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.143159 4739 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.143193 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.143203 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mczs9\" (UniqueName: \"kubernetes.io/projected/28825764-dace-4769-b71e-4d55b8aa1d97-kube-api-access-mczs9\") on node \"crc\" DevicePath \"\"" Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.143213 4739 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28825764-dace-4769-b71e-4d55b8aa1d97-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.530567 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523781-z64zk" event={"ID":"28825764-dace-4769-b71e-4d55b8aa1d97","Type":"ContainerDied","Data":"e9ddf44b0aefad5f9fe9a71113008b11ece17f69b1425c1ef2033929a919afe3"} Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.530596 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29523781-z64zk" Feb 18 15:01:08 crc kubenswrapper[4739]: I0218 15:01:08.530609 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9ddf44b0aefad5f9fe9a71113008b11ece17f69b1425c1ef2033929a919afe3" Feb 18 15:01:19 crc kubenswrapper[4739]: I0218 15:01:19.414575 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 15:01:19 crc kubenswrapper[4739]: E0218 15:01:19.415965 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:01:33 crc kubenswrapper[4739]: I0218 15:01:33.411365 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 15:01:33 crc kubenswrapper[4739]: E0218 15:01:33.412196 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:01:44 crc kubenswrapper[4739]: I0218 15:01:44.411169 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 15:01:44 crc kubenswrapper[4739]: E0218 15:01:44.411989 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:01:56 crc kubenswrapper[4739]: I0218 15:01:56.411964 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 15:01:56 crc kubenswrapper[4739]: E0218 15:01:56.415168 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:02:09 crc kubenswrapper[4739]: I0218 15:02:09.410257 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 15:02:09 crc kubenswrapper[4739]: E0218 15:02:09.410981 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:02:23 crc kubenswrapper[4739]: I0218 15:02:23.410904 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 15:02:23 crc kubenswrapper[4739]: E0218 15:02:23.411870 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:02:38 crc kubenswrapper[4739]: I0218 15:02:38.437248 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 15:02:38 crc kubenswrapper[4739]: E0218 15:02:38.438283 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:02:51 crc kubenswrapper[4739]: I0218 15:02:51.412346 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 15:02:51 crc kubenswrapper[4739]: E0218 15:02:51.413789 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:03:06 crc kubenswrapper[4739]: I0218 15:03:06.410358 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 15:03:07 crc kubenswrapper[4739]: I0218 15:03:07.701576 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"4ce4e7be891ec5817c67e9cef0bf1b67c39e35acec8d6701504327c87612f88b"} Feb 18 15:05:29 crc kubenswrapper[4739]: I0218 15:05:29.375030 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:05:29 crc kubenswrapper[4739]: I0218 15:05:29.375568 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:05:59 crc kubenswrapper[4739]: I0218 15:05:59.373543 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:05:59 crc kubenswrapper[4739]: I0218 15:05:59.374366 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:06:29 crc kubenswrapper[4739]: I0218 15:06:29.373426 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:06:29 crc kubenswrapper[4739]: I0218 15:06:29.374071 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:06:29 crc kubenswrapper[4739]: I0218 15:06:29.374127 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 15:06:29 crc kubenswrapper[4739]: I0218 15:06:29.375153 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4ce4e7be891ec5817c67e9cef0bf1b67c39e35acec8d6701504327c87612f88b"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 15:06:29 crc kubenswrapper[4739]: I0218 15:06:29.375225 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://4ce4e7be891ec5817c67e9cef0bf1b67c39e35acec8d6701504327c87612f88b" gracePeriod=600 Feb 18 15:06:29 crc kubenswrapper[4739]: I0218 15:06:29.891331 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="4ce4e7be891ec5817c67e9cef0bf1b67c39e35acec8d6701504327c87612f88b" exitCode=0 Feb 18 15:06:29 crc kubenswrapper[4739]: I0218 15:06:29.891425 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"4ce4e7be891ec5817c67e9cef0bf1b67c39e35acec8d6701504327c87612f88b"} Feb 18 15:06:29 crc kubenswrapper[4739]: I0218 15:06:29.891653 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863"} Feb 18 15:06:29 crc kubenswrapper[4739]: I0218 15:06:29.891683 4739 scope.go:117] "RemoveContainer" containerID="f9797b145568e44bdf4d0d3d9baf2c5cb09c9377c4c865085c5b2e44834877da" Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 
15:06:48.544877 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-chrlt"] Feb 18 15:06:48 crc kubenswrapper[4739]: E0218 15:06:48.546112 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28825764-dace-4769-b71e-4d55b8aa1d97" containerName="keystone-cron" Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.546126 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="28825764-dace-4769-b71e-4d55b8aa1d97" containerName="keystone-cron" Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.546356 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="28825764-dace-4769-b71e-4d55b8aa1d97" containerName="keystone-cron" Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.548056 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chrlt" Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.561957 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-chrlt"] Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.673353 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/164a424c-e71a-43f3-9f77-bb4fe38a744d-catalog-content\") pod \"redhat-operators-chrlt\" (UID: \"164a424c-e71a-43f3-9f77-bb4fe38a744d\") " pod="openshift-marketplace/redhat-operators-chrlt" Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.673521 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/164a424c-e71a-43f3-9f77-bb4fe38a744d-utilities\") pod \"redhat-operators-chrlt\" (UID: \"164a424c-e71a-43f3-9f77-bb4fe38a744d\") " pod="openshift-marketplace/redhat-operators-chrlt" Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.673724 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v4sj\" (UniqueName: \"kubernetes.io/projected/164a424c-e71a-43f3-9f77-bb4fe38a744d-kube-api-access-5v4sj\") pod \"redhat-operators-chrlt\" (UID: \"164a424c-e71a-43f3-9f77-bb4fe38a744d\") " pod="openshift-marketplace/redhat-operators-chrlt" Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.776688 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/164a424c-e71a-43f3-9f77-bb4fe38a744d-utilities\") pod \"redhat-operators-chrlt\" (UID: \"164a424c-e71a-43f3-9f77-bb4fe38a744d\") " pod="openshift-marketplace/redhat-operators-chrlt" Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.776772 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v4sj\" (UniqueName: \"kubernetes.io/projected/164a424c-e71a-43f3-9f77-bb4fe38a744d-kube-api-access-5v4sj\") pod \"redhat-operators-chrlt\" (UID: \"164a424c-e71a-43f3-9f77-bb4fe38a744d\") " pod="openshift-marketplace/redhat-operators-chrlt" Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.776997 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/164a424c-e71a-43f3-9f77-bb4fe38a744d-catalog-content\") pod \"redhat-operators-chrlt\" (UID: \"164a424c-e71a-43f3-9f77-bb4fe38a744d\") " pod="openshift-marketplace/redhat-operators-chrlt" Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.777520 4739 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/164a424c-e71a-43f3-9f77-bb4fe38a744d-utilities\") pod \"redhat-operators-chrlt\" (UID: \"164a424c-e71a-43f3-9f77-bb4fe38a744d\") " pod="openshift-marketplace/redhat-operators-chrlt" Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.777575 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/164a424c-e71a-43f3-9f77-bb4fe38a744d-catalog-content\") pod \"redhat-operators-chrlt\" (UID: \"164a424c-e71a-43f3-9f77-bb4fe38a744d\") " pod="openshift-marketplace/redhat-operators-chrlt" Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.805378 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v4sj\" (UniqueName: \"kubernetes.io/projected/164a424c-e71a-43f3-9f77-bb4fe38a744d-kube-api-access-5v4sj\") pod \"redhat-operators-chrlt\" (UID: \"164a424c-e71a-43f3-9f77-bb4fe38a744d\") " pod="openshift-marketplace/redhat-operators-chrlt" Feb 18 15:06:48 crc kubenswrapper[4739]: I0218 15:06:48.874557 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chrlt" Feb 18 15:06:49 crc kubenswrapper[4739]: I0218 15:06:49.435530 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-chrlt"] Feb 18 15:06:50 crc kubenswrapper[4739]: I0218 15:06:50.126434 4739 generic.go:334] "Generic (PLEG): container finished" podID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerID="e416eea9bd79b0e7244c0c9a61d33e40150ebf545e82d586d05db168b1b32dd9" exitCode=0 Feb 18 15:06:50 crc kubenswrapper[4739]: I0218 15:06:50.126628 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chrlt" event={"ID":"164a424c-e71a-43f3-9f77-bb4fe38a744d","Type":"ContainerDied","Data":"e416eea9bd79b0e7244c0c9a61d33e40150ebf545e82d586d05db168b1b32dd9"} Feb 18 15:06:50 crc kubenswrapper[4739]: I0218 15:06:50.127623 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chrlt" event={"ID":"164a424c-e71a-43f3-9f77-bb4fe38a744d","Type":"ContainerStarted","Data":"ca3bb1bb9a12bbf83e9057bedfb179dab26a7c891d3e1272b80769f2b56ebde9"} Feb 18 15:06:50 crc kubenswrapper[4739]: I0218 15:06:50.128939 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 15:06:51 crc kubenswrapper[4739]: I0218 15:06:51.139868 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chrlt" event={"ID":"164a424c-e71a-43f3-9f77-bb4fe38a744d","Type":"ContainerStarted","Data":"018923d8c5f06c74cc9913f94bffd264cb4543e7b0885a84b616a98e2b92064c"} Feb 18 15:06:57 crc kubenswrapper[4739]: I0218 15:06:57.199923 4739 generic.go:334] "Generic (PLEG): container finished" podID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerID="018923d8c5f06c74cc9913f94bffd264cb4543e7b0885a84b616a98e2b92064c" exitCode=0 Feb 18 15:06:57 crc kubenswrapper[4739]: I0218 15:06:57.199956 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chrlt" event={"ID":"164a424c-e71a-43f3-9f77-bb4fe38a744d","Type":"ContainerDied","Data":"018923d8c5f06c74cc9913f94bffd264cb4543e7b0885a84b616a98e2b92064c"} Feb 18 15:06:58 crc kubenswrapper[4739]: I0218 15:06:58.211123 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-chrlt" event={"ID":"164a424c-e71a-43f3-9f77-bb4fe38a744d","Type":"ContainerStarted","Data":"7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93"} Feb 18 15:06:58 crc kubenswrapper[4739]: I0218 15:06:58.233257 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-chrlt" podStartSLOduration=2.45058507 podStartE2EDuration="10.233239812s" podCreationTimestamp="2026-02-18 15:06:48 +0000 UTC" firstStartedPulling="2026-02-18 15:06:50.128735091 +0000 UTC m=+4042.624456003" lastFinishedPulling="2026-02-18 15:06:57.911389823 +0000 UTC m=+4050.407110745" observedRunningTime="2026-02-18 15:06:58.230052182 +0000 UTC m=+4050.725773104" watchObservedRunningTime="2026-02-18 15:06:58.233239812 +0000 UTC m=+4050.728960734" Feb 18 15:06:58 crc kubenswrapper[4739]: I0218 15:06:58.875712 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-chrlt" Feb 18 15:06:58 crc kubenswrapper[4739]: I0218 15:06:58.875812 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-chrlt" Feb 18 15:06:59 crc kubenswrapper[4739]: I0218 15:06:59.930790 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-chrlt" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerName="registry-server" probeResult="failure" output=< Feb 18 15:06:59 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:06:59 crc kubenswrapper[4739]: > Feb 18 15:07:09 crc kubenswrapper[4739]: I0218 15:07:09.929284 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-chrlt" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerName="registry-server" probeResult="failure" output=< Feb 18 15:07:09 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:07:09 crc kubenswrapper[4739]: > Feb 18 15:07:19 crc kubenswrapper[4739]: I0218 15:07:19.929471 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-chrlt" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerName="registry-server" probeResult="failure" output=< Feb 18 15:07:19 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:07:19 crc kubenswrapper[4739]: > Feb 18 15:07:29 crc kubenswrapper[4739]: I0218 15:07:29.927486 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-chrlt" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerName="registry-server" probeResult="failure" output=< Feb 18 15:07:29 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:07:29 crc kubenswrapper[4739]: > Feb 18 15:07:38 crc kubenswrapper[4739]: I0218 15:07:38.935831 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-chrlt" Feb 18 15:07:38 crc kubenswrapper[4739]: I0218 15:07:38.999412 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-chrlt" Feb 18 15:07:39 crc kubenswrapper[4739]: I0218 15:07:39.176573 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-chrlt"] Feb 18 15:07:40 crc kubenswrapper[4739]: I0218 15:07:40.684684 4739 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/redhat-operators-chrlt" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerName="registry-server" containerID="cri-o://7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93" gracePeriod=2 Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.231060 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chrlt" Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.301057 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/164a424c-e71a-43f3-9f77-bb4fe38a744d-utilities\") pod \"164a424c-e71a-43f3-9f77-bb4fe38a744d\" (UID: \"164a424c-e71a-43f3-9f77-bb4fe38a744d\") " Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.301362 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v4sj\" (UniqueName: \"kubernetes.io/projected/164a424c-e71a-43f3-9f77-bb4fe38a744d-kube-api-access-5v4sj\") pod \"164a424c-e71a-43f3-9f77-bb4fe38a744d\" (UID: \"164a424c-e71a-43f3-9f77-bb4fe38a744d\") " Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.301431 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/164a424c-e71a-43f3-9f77-bb4fe38a744d-catalog-content\") pod \"164a424c-e71a-43f3-9f77-bb4fe38a744d\" (UID: \"164a424c-e71a-43f3-9f77-bb4fe38a744d\") " Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.302026 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/164a424c-e71a-43f3-9f77-bb4fe38a744d-utilities" (OuterVolumeSpecName: "utilities") pod "164a424c-e71a-43f3-9f77-bb4fe38a744d" (UID: "164a424c-e71a-43f3-9f77-bb4fe38a744d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.309856 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/164a424c-e71a-43f3-9f77-bb4fe38a744d-kube-api-access-5v4sj" (OuterVolumeSpecName: "kube-api-access-5v4sj") pod "164a424c-e71a-43f3-9f77-bb4fe38a744d" (UID: "164a424c-e71a-43f3-9f77-bb4fe38a744d"). InnerVolumeSpecName "kube-api-access-5v4sj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.311032 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/164a424c-e71a-43f3-9f77-bb4fe38a744d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.311079 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v4sj\" (UniqueName: \"kubernetes.io/projected/164a424c-e71a-43f3-9f77-bb4fe38a744d-kube-api-access-5v4sj\") on node \"crc\" DevicePath \"\"" Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.421502 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/164a424c-e71a-43f3-9f77-bb4fe38a744d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "164a424c-e71a-43f3-9f77-bb4fe38a744d" (UID: "164a424c-e71a-43f3-9f77-bb4fe38a744d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.515022 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/164a424c-e71a-43f3-9f77-bb4fe38a744d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.700865 4739 generic.go:334] "Generic (PLEG): container finished" podID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerID="7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93" exitCode=0 Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.700931 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chrlt" Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.700954 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chrlt" event={"ID":"164a424c-e71a-43f3-9f77-bb4fe38a744d","Type":"ContainerDied","Data":"7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93"} Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.701032 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chrlt" event={"ID":"164a424c-e71a-43f3-9f77-bb4fe38a744d","Type":"ContainerDied","Data":"ca3bb1bb9a12bbf83e9057bedfb179dab26a7c891d3e1272b80769f2b56ebde9"} Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.701078 4739 scope.go:117] "RemoveContainer" containerID="7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93" Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.737434 4739 scope.go:117] "RemoveContainer" containerID="018923d8c5f06c74cc9913f94bffd264cb4543e7b0885a84b616a98e2b92064c" Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.744100 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-chrlt"] Feb 18 15:07:41 crc kubenswrapper[4739]: I0218 15:07:41.754173 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-chrlt"] Feb 18 15:07:42 crc kubenswrapper[4739]: I0218 15:07:42.318693 4739 scope.go:117] "RemoveContainer" containerID="e416eea9bd79b0e7244c0c9a61d33e40150ebf545e82d586d05db168b1b32dd9" Feb 18 15:07:42 crc kubenswrapper[4739]: I0218 15:07:42.375968 4739 scope.go:117] "RemoveContainer" containerID="7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93" Feb 18 15:07:42 crc kubenswrapper[4739]: E0218 15:07:42.376407 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93\": container with ID starting with 7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93 not found: ID does not exist" containerID="7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93" Feb 18 15:07:42 crc kubenswrapper[4739]: I0218 15:07:42.376483 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93"} err="failed to get container status \"7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93\": rpc error: code = NotFound desc = could not find container \"7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93\": container with ID starting with 7e4b747622714e63c9a3c7705e541015843d20ffd63829ea4ea9ee05c082fd93 not found: ID does not exist" Feb 18 15:07:42 crc 
kubenswrapper[4739]: I0218 15:07:42.376520 4739 scope.go:117] "RemoveContainer" containerID="018923d8c5f06c74cc9913f94bffd264cb4543e7b0885a84b616a98e2b92064c" Feb 18 15:07:42 crc kubenswrapper[4739]: E0218 15:07:42.376850 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"018923d8c5f06c74cc9913f94bffd264cb4543e7b0885a84b616a98e2b92064c\": container with ID starting with 018923d8c5f06c74cc9913f94bffd264cb4543e7b0885a84b616a98e2b92064c not found: ID does not exist" containerID="018923d8c5f06c74cc9913f94bffd264cb4543e7b0885a84b616a98e2b92064c" Feb 18 15:07:42 crc kubenswrapper[4739]: I0218 15:07:42.376881 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"018923d8c5f06c74cc9913f94bffd264cb4543e7b0885a84b616a98e2b92064c"} err="failed to get container status \"018923d8c5f06c74cc9913f94bffd264cb4543e7b0885a84b616a98e2b92064c\": rpc error: code = NotFound desc = could not find container \"018923d8c5f06c74cc9913f94bffd264cb4543e7b0885a84b616a98e2b92064c\": container with ID starting with 018923d8c5f06c74cc9913f94bffd264cb4543e7b0885a84b616a98e2b92064c not found: ID does not exist" Feb 18 15:07:42 crc kubenswrapper[4739]: I0218 15:07:42.376908 4739 scope.go:117] "RemoveContainer" containerID="e416eea9bd79b0e7244c0c9a61d33e40150ebf545e82d586d05db168b1b32dd9" Feb 18 15:07:42 crc kubenswrapper[4739]: E0218 15:07:42.377309 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e416eea9bd79b0e7244c0c9a61d33e40150ebf545e82d586d05db168b1b32dd9\": container with ID starting with e416eea9bd79b0e7244c0c9a61d33e40150ebf545e82d586d05db168b1b32dd9 not found: ID does not exist" containerID="e416eea9bd79b0e7244c0c9a61d33e40150ebf545e82d586d05db168b1b32dd9" Feb 18 15:07:42 crc kubenswrapper[4739]: I0218 15:07:42.377354 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e416eea9bd79b0e7244c0c9a61d33e40150ebf545e82d586d05db168b1b32dd9"} err="failed to get container status \"e416eea9bd79b0e7244c0c9a61d33e40150ebf545e82d586d05db168b1b32dd9\": rpc error: code = NotFound desc = could not find container \"e416eea9bd79b0e7244c0c9a61d33e40150ebf545e82d586d05db168b1b32dd9\": container with ID starting with e416eea9bd79b0e7244c0c9a61d33e40150ebf545e82d586d05db168b1b32dd9 not found: ID does not exist" Feb 18 15:07:42 crc kubenswrapper[4739]: I0218 15:07:42.424158 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" path="/var/lib/kubelet/pods/164a424c-e71a-43f3-9f77-bb4fe38a744d/volumes" Feb 18 15:07:52 crc kubenswrapper[4739]: I0218 15:07:52.884561 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h9phw"] Feb 18 15:07:52 crc kubenswrapper[4739]: E0218 15:07:52.885768 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerName="extract-utilities" Feb 18 15:07:52 crc kubenswrapper[4739]: I0218 15:07:52.885787 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerName="extract-utilities" Feb 18 15:07:52 crc kubenswrapper[4739]: E0218 15:07:52.885820 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerName="extract-content" Feb 18 15:07:52 crc kubenswrapper[4739]: I0218 15:07:52.885829 4739 
state_mem.go:107] "Deleted CPUSet assignment" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerName="extract-content" Feb 18 15:07:52 crc kubenswrapper[4739]: E0218 15:07:52.885876 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerName="registry-server" Feb 18 15:07:52 crc kubenswrapper[4739]: I0218 15:07:52.885887 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerName="registry-server" Feb 18 15:07:52 crc kubenswrapper[4739]: I0218 15:07:52.886194 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="164a424c-e71a-43f3-9f77-bb4fe38a744d" containerName="registry-server" Feb 18 15:07:52 crc kubenswrapper[4739]: I0218 15:07:52.888484 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:07:52 crc kubenswrapper[4739]: I0218 15:07:52.911397 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h9phw"] Feb 18 15:07:53 crc kubenswrapper[4739]: I0218 15:07:53.014655 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-catalog-content\") pod \"community-operators-h9phw\" (UID: \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\") " pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:07:53 crc kubenswrapper[4739]: I0218 15:07:53.014933 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-utilities\") pod \"community-operators-h9phw\" (UID: \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\") " pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:07:53 crc kubenswrapper[4739]: I0218 15:07:53.015058 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24msx\" (UniqueName: \"kubernetes.io/projected/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-kube-api-access-24msx\") pod \"community-operators-h9phw\" (UID: \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\") " pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:07:53 crc kubenswrapper[4739]: I0218 15:07:53.118145 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-utilities\") pod \"community-operators-h9phw\" (UID: \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\") " pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:07:53 crc kubenswrapper[4739]: I0218 15:07:53.118215 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24msx\" (UniqueName: \"kubernetes.io/projected/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-kube-api-access-24msx\") pod \"community-operators-h9phw\" (UID: \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\") " pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:07:53 crc kubenswrapper[4739]: I0218 15:07:53.118426 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-catalog-content\") pod \"community-operators-h9phw\" (UID: \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\") " pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:07:53 crc 
kubenswrapper[4739]: I0218 15:07:53.119120 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-catalog-content\") pod \"community-operators-h9phw\" (UID: \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\") " pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:07:53 crc kubenswrapper[4739]: I0218 15:07:53.119412 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-utilities\") pod \"community-operators-h9phw\" (UID: \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\") " pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:07:53 crc kubenswrapper[4739]: I0218 15:07:53.142901 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24msx\" (UniqueName: \"kubernetes.io/projected/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-kube-api-access-24msx\") pod \"community-operators-h9phw\" (UID: \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\") " pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:07:53 crc kubenswrapper[4739]: I0218 15:07:53.216611 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:07:53 crc kubenswrapper[4739]: I0218 15:07:53.844866 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h9phw"] Feb 18 15:07:54 crc kubenswrapper[4739]: I0218 15:07:54.855773 4739 generic.go:334] "Generic (PLEG): container finished" podID="2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" containerID="f1f9b9e1c54beae52f7265ee28954799144516f500fca17c90c4ff42b09460aa" exitCode=0 Feb 18 15:07:54 crc kubenswrapper[4739]: I0218 15:07:54.856081 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9phw" event={"ID":"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4","Type":"ContainerDied","Data":"f1f9b9e1c54beae52f7265ee28954799144516f500fca17c90c4ff42b09460aa"} Feb 18 15:07:54 crc kubenswrapper[4739]: I0218 15:07:54.856108 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9phw" event={"ID":"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4","Type":"ContainerStarted","Data":"ccd22c0d61af81fe07eb21bd9c5f5fb8121e67c49af5486738745ffcb1a098b6"} Feb 18 15:07:55 crc kubenswrapper[4739]: I0218 15:07:55.869308 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9phw" event={"ID":"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4","Type":"ContainerStarted","Data":"d3bf11dd2b7420de54ee4f11991b9952208fb948839ede51ee1c1382e5d6ea79"} Feb 18 15:07:56 crc kubenswrapper[4739]: I0218 15:07:56.880269 4739 generic.go:334] "Generic (PLEG): container finished" podID="2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" containerID="d3bf11dd2b7420de54ee4f11991b9952208fb948839ede51ee1c1382e5d6ea79" exitCode=0 Feb 18 15:07:56 crc kubenswrapper[4739]: I0218 15:07:56.880532 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9phw" event={"ID":"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4","Type":"ContainerDied","Data":"d3bf11dd2b7420de54ee4f11991b9952208fb948839ede51ee1c1382e5d6ea79"} Feb 18 15:07:57 crc kubenswrapper[4739]: I0218 15:07:57.894306 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9phw" 
event={"ID":"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4","Type":"ContainerStarted","Data":"46e3f4ebf7ee896208793ef8608d5a06d296d2696d1c42b8f84261514a516633"} Feb 18 15:07:57 crc kubenswrapper[4739]: I0218 15:07:57.921632 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h9phw" podStartSLOduration=3.399007793 podStartE2EDuration="5.921601946s" podCreationTimestamp="2026-02-18 15:07:52 +0000 UTC" firstStartedPulling="2026-02-18 15:07:54.858939226 +0000 UTC m=+4107.354660138" lastFinishedPulling="2026-02-18 15:07:57.381533369 +0000 UTC m=+4109.877254291" observedRunningTime="2026-02-18 15:07:57.91345814 +0000 UTC m=+4110.409179082" watchObservedRunningTime="2026-02-18 15:07:57.921601946 +0000 UTC m=+4110.417322868" Feb 18 15:08:03 crc kubenswrapper[4739]: I0218 15:08:03.217493 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:08:03 crc kubenswrapper[4739]: I0218 15:08:03.218060 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:08:03 crc kubenswrapper[4739]: I0218 15:08:03.269565 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:08:04 crc kubenswrapper[4739]: I0218 15:08:04.441509 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:08:04 crc kubenswrapper[4739]: I0218 15:08:04.499223 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h9phw"] Feb 18 15:08:05 crc kubenswrapper[4739]: I0218 15:08:05.971601 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h9phw" podUID="2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" containerName="registry-server" containerID="cri-o://46e3f4ebf7ee896208793ef8608d5a06d296d2696d1c42b8f84261514a516633" gracePeriod=2 Feb 18 15:08:06 crc kubenswrapper[4739]: E0218 15:08:06.316521 4739 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.80:59324->38.102.83.80:36701: write tcp 38.102.83.80:59324->38.102.83.80:36701: write: broken pipe Feb 18 15:08:06 crc kubenswrapper[4739]: I0218 15:08:06.984607 4739 generic.go:334] "Generic (PLEG): container finished" podID="2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" containerID="46e3f4ebf7ee896208793ef8608d5a06d296d2696d1c42b8f84261514a516633" exitCode=0 Feb 18 15:08:06 crc kubenswrapper[4739]: I0218 15:08:06.984647 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9phw" event={"ID":"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4","Type":"ContainerDied","Data":"46e3f4ebf7ee896208793ef8608d5a06d296d2696d1c42b8f84261514a516633"} Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.452234 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.607058 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-utilities\") pod \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\" (UID: \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\") " Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.607390 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24msx\" (UniqueName: \"kubernetes.io/projected/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-kube-api-access-24msx\") pod \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\" (UID: \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\") " Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.607619 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-catalog-content\") pod \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\" (UID: \"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4\") " Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.608416 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-utilities" (OuterVolumeSpecName: "utilities") pod "2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" (UID: "2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.614973 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-kube-api-access-24msx" (OuterVolumeSpecName: "kube-api-access-24msx") pod "2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" (UID: "2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4"). InnerVolumeSpecName "kube-api-access-24msx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.667415 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" (UID: "2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4"). InnerVolumeSpecName "catalog-content". 
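The "Killing container with a grace period ... gracePeriod=2" entries above describe the usual stop sequence for these short-lived marketplace pods: the runtime delivers SIGTERM and, if the process is still running when the grace period expires, follows up with SIGKILL. The sketch below is a process-level illustration of that pattern only; it acts on an ordinary OS process rather than a CRI-O container, and the /bin/sleep child and two-second period are illustrative assumptions chosen to mirror the log.

// graceful_kill.go: sketch of the stop sequence implied by
// "Killing container with a grace period ... gracePeriod=2":
// send SIGTERM, wait up to the grace period, then SIGKILL.
// Operates on a plain OS process, not a CRI-O container.
package main

import (
	"log"
	"os"
	"syscall"
	"time"
)

func stopWithGrace(p *os.Process, grace time.Duration) error {
	if err := p.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() {
		_, err := p.Wait()
		done <- err
	}()
	select {
	case err := <-done:
		return err // exited within the grace period (exitCode=0 in the events above)
	case <-time.After(grace):
		return p.Kill() // grace period expired; force-kill
	}
}

func main() {
	// Example child process; path and arguments are assumptions for the sketch.
	p, err := os.StartProcess("/bin/sleep", []string{"sleep", "60"}, &os.ProcAttr{})
	if err != nil {
		log.Fatal(err)
	}
	_ = stopWithGrace(p, 2*time.Second) // mirrors gracePeriod=2 above
}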
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.711021 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.711077 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24msx\" (UniqueName: \"kubernetes.io/projected/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-kube-api-access-24msx\") on node \"crc\" DevicePath \"\"" Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.711094 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.996860 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h9phw" event={"ID":"2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4","Type":"ContainerDied","Data":"ccd22c0d61af81fe07eb21bd9c5f5fb8121e67c49af5486738745ffcb1a098b6"} Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.997134 4739 scope.go:117] "RemoveContainer" containerID="46e3f4ebf7ee896208793ef8608d5a06d296d2696d1c42b8f84261514a516633" Feb 18 15:08:07 crc kubenswrapper[4739]: I0218 15:08:07.996897 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h9phw" Feb 18 15:08:08 crc kubenswrapper[4739]: I0218 15:08:08.021141 4739 scope.go:117] "RemoveContainer" containerID="d3bf11dd2b7420de54ee4f11991b9952208fb948839ede51ee1c1382e5d6ea79" Feb 18 15:08:08 crc kubenswrapper[4739]: I0218 15:08:08.036260 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h9phw"] Feb 18 15:08:08 crc kubenswrapper[4739]: I0218 15:08:08.047310 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h9phw"] Feb 18 15:08:08 crc kubenswrapper[4739]: I0218 15:08:08.059993 4739 scope.go:117] "RemoveContainer" containerID="f1f9b9e1c54beae52f7265ee28954799144516f500fca17c90c4ff42b09460aa" Feb 18 15:08:08 crc kubenswrapper[4739]: I0218 15:08:08.423484 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" path="/var/lib/kubelet/pods/2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4/volumes" Feb 18 15:08:29 crc kubenswrapper[4739]: I0218 15:08:29.372848 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:08:29 crc kubenswrapper[4739]: I0218 15:08:29.373363 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:08:59 crc kubenswrapper[4739]: I0218 15:08:59.372291 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:08:59 crc kubenswrapper[4739]: I0218 15:08:59.372873 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.536698 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-85jxq"] Feb 18 15:09:18 crc kubenswrapper[4739]: E0218 15:09:18.537892 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" containerName="extract-utilities" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.537908 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" containerName="extract-utilities" Feb 18 15:09:18 crc kubenswrapper[4739]: E0218 15:09:18.537918 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" containerName="extract-content" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.537925 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" containerName="extract-content" Feb 18 15:09:18 crc kubenswrapper[4739]: E0218 15:09:18.537964 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" containerName="registry-server" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.537971 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" containerName="registry-server" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.538227 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a3b20aa-5aee-4ff0-bc4e-b1eb26e90aa4" containerName="registry-server" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.540506 4739 util.go:30] "No sandbox for pod can be found. 
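The machine-config-daemon entries above show an HTTP liveness probe against http://127.0.0.1:8798/health failing with "connection refused", which is what later drives the restart and CrashLoopBackOff records. The snippet below is a minimal stand-in for that kind of HTTP GET check: a short-timeout client that treats any transport error or out-of-range status as a failure. Only the port and path come from the log; the one-second timeout and the 200-399 success range are assumptions about typical kubelet probe behaviour.

// http_liveness.go: sketch of an HTTP GET liveness check like the one failing
// above against http://127.0.0.1:8798/health. A refused connection surfaces as
// a transport error, which a kubelet-style prober records as a failure.
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{Timeout: 1 * time.Second} // assumed probe timeout
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
		fmt.Fprintf(os.Stderr, "probe failure: %v\n", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		fmt.Fprintf(os.Stderr, "probe failure: status %d\n", resp.StatusCode)
		os.Exit(1)
	}
	fmt.Println("probe success:", resp.Status)
}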
Need to start a new one" pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.552537 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-85jxq"] Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.640234 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpnnd\" (UniqueName: \"kubernetes.io/projected/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-kube-api-access-xpnnd\") pod \"certified-operators-85jxq\" (UID: \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\") " pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.640691 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-catalog-content\") pod \"certified-operators-85jxq\" (UID: \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\") " pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.640934 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-utilities\") pod \"certified-operators-85jxq\" (UID: \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\") " pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.743369 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpnnd\" (UniqueName: \"kubernetes.io/projected/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-kube-api-access-xpnnd\") pod \"certified-operators-85jxq\" (UID: \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\") " pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.743525 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-catalog-content\") pod \"certified-operators-85jxq\" (UID: \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\") " pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.743579 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-utilities\") pod \"certified-operators-85jxq\" (UID: \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\") " pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.743978 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-catalog-content\") pod \"certified-operators-85jxq\" (UID: \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\") " pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.744105 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-utilities\") pod \"certified-operators-85jxq\" (UID: \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\") " pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.763276 4739 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xpnnd\" (UniqueName: \"kubernetes.io/projected/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-kube-api-access-xpnnd\") pod \"certified-operators-85jxq\" (UID: \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\") " pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:18 crc kubenswrapper[4739]: I0218 15:09:18.870312 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:19 crc kubenswrapper[4739]: I0218 15:09:19.454889 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-85jxq"] Feb 18 15:09:19 crc kubenswrapper[4739]: I0218 15:09:19.800632 4739 generic.go:334] "Generic (PLEG): container finished" podID="e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" containerID="d77bb274b0ebd18b8e9012910b5d639998421d4e40e05a929cb15ce4ac8cd3ee" exitCode=0 Feb 18 15:09:19 crc kubenswrapper[4739]: I0218 15:09:19.800804 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85jxq" event={"ID":"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68","Type":"ContainerDied","Data":"d77bb274b0ebd18b8e9012910b5d639998421d4e40e05a929cb15ce4ac8cd3ee"} Feb 18 15:09:19 crc kubenswrapper[4739]: I0218 15:09:19.801006 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85jxq" event={"ID":"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68","Type":"ContainerStarted","Data":"dee91aaa97ae41d79d63105d8dd698fcc24ad6d895bd98e68ce5202a805eaeec"} Feb 18 15:09:21 crc kubenswrapper[4739]: I0218 15:09:21.823213 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85jxq" event={"ID":"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68","Type":"ContainerStarted","Data":"d527b8db8e0b27e35bc55dfa7c3257938fd42c01335d770de752389bc96514d1"} Feb 18 15:09:23 crc kubenswrapper[4739]: I0218 15:09:23.844935 4739 generic.go:334] "Generic (PLEG): container finished" podID="e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" containerID="d527b8db8e0b27e35bc55dfa7c3257938fd42c01335d770de752389bc96514d1" exitCode=0 Feb 18 15:09:23 crc kubenswrapper[4739]: I0218 15:09:23.844985 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85jxq" event={"ID":"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68","Type":"ContainerDied","Data":"d527b8db8e0b27e35bc55dfa7c3257938fd42c01335d770de752389bc96514d1"} Feb 18 15:09:24 crc kubenswrapper[4739]: I0218 15:09:24.862083 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85jxq" event={"ID":"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68","Type":"ContainerStarted","Data":"fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2"} Feb 18 15:09:24 crc kubenswrapper[4739]: I0218 15:09:24.887933 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-85jxq" podStartSLOduration=2.4415700830000002 podStartE2EDuration="6.887912639s" podCreationTimestamp="2026-02-18 15:09:18 +0000 UTC" firstStartedPulling="2026-02-18 15:09:19.803975244 +0000 UTC m=+4192.299696166" lastFinishedPulling="2026-02-18 15:09:24.2503178 +0000 UTC m=+4196.746038722" observedRunningTime="2026-02-18 15:09:24.883551489 +0000 UTC m=+4197.379272421" watchObservedRunningTime="2026-02-18 15:09:24.887912639 +0000 UTC m=+4197.383633561" Feb 18 15:09:28 crc kubenswrapper[4739]: I0218 15:09:28.871282 4739 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:28 crc kubenswrapper[4739]: I0218 15:09:28.871754 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:28 crc kubenswrapper[4739]: I0218 15:09:28.927651 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:29 crc kubenswrapper[4739]: I0218 15:09:29.373177 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:09:29 crc kubenswrapper[4739]: I0218 15:09:29.373228 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:09:29 crc kubenswrapper[4739]: I0218 15:09:29.373298 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 15:09:29 crc kubenswrapper[4739]: I0218 15:09:29.374085 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 15:09:29 crc kubenswrapper[4739]: I0218 15:09:29.374202 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" gracePeriod=600 Feb 18 15:09:29 crc kubenswrapper[4739]: E0218 15:09:29.516277 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:09:29 crc kubenswrapper[4739]: I0218 15:09:29.921368 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" exitCode=0 Feb 18 15:09:29 crc kubenswrapper[4739]: I0218 15:09:29.921418 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863"} Feb 18 15:09:29 crc kubenswrapper[4739]: I0218 15:09:29.921488 4739 scope.go:117] "RemoveContainer" containerID="4ce4e7be891ec5817c67e9cef0bf1b67c39e35acec8d6701504327c87612f88b" Feb 18 15:09:29 crc kubenswrapper[4739]: 
I0218 15:09:29.922343 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:09:29 crc kubenswrapper[4739]: E0218 15:09:29.922709 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:09:38 crc kubenswrapper[4739]: I0218 15:09:38.930597 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:38 crc kubenswrapper[4739]: I0218 15:09:38.988493 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-85jxq"] Feb 18 15:09:39 crc kubenswrapper[4739]: I0218 15:09:39.031687 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-85jxq" podUID="e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" containerName="registry-server" containerID="cri-o://fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2" gracePeriod=2 Feb 18 15:09:39 crc kubenswrapper[4739]: I0218 15:09:39.588112 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:39 crc kubenswrapper[4739]: I0218 15:09:39.680260 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpnnd\" (UniqueName: \"kubernetes.io/projected/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-kube-api-access-xpnnd\") pod \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\" (UID: \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\") " Feb 18 15:09:39 crc kubenswrapper[4739]: I0218 15:09:39.680359 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-catalog-content\") pod \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\" (UID: \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\") " Feb 18 15:09:39 crc kubenswrapper[4739]: I0218 15:09:39.680432 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-utilities\") pod \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\" (UID: \"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68\") " Feb 18 15:09:39 crc kubenswrapper[4739]: I0218 15:09:39.681255 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-utilities" (OuterVolumeSpecName: "utilities") pod "e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" (UID: "e40f51e6-e20f-4cd5-b77e-e55a23ca6a68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:09:39 crc kubenswrapper[4739]: I0218 15:09:39.686570 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-kube-api-access-xpnnd" (OuterVolumeSpecName: "kube-api-access-xpnnd") pod "e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" (UID: "e40f51e6-e20f-4cd5-b77e-e55a23ca6a68"). InnerVolumeSpecName "kube-api-access-xpnnd". 
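The "back-off 5m0s restarting failed container" errors above repeat while the machine-config-daemon container sits in CrashLoopBackOff: the kubelet lengthens the delay between restart attempts after each failure until it reaches the cap quoted in the message. The sketch below models that doubling schedule; the five-minute cap is taken from the log, while the ten-second starting delay is an assumption about the default and not something the log states.

// crashloop_backoff.go: model of the restart back-off behind the
// "back-off 5m0s restarting failed container" messages above.
// The delay doubles per failed restart and is capped at 5 minutes;
// the 10s starting point is an assumed default, the cap comes from the log.
package main

import (
	"fmt"
	"time"
)

func backoff(restarts int) time.Duration {
	const (
		initial  = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	d := initial
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restart %d -> wait %v\n", r, backoff(r))
	}
}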
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:09:39 crc kubenswrapper[4739]: I0218 15:09:39.733825 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" (UID: "e40f51e6-e20f-4cd5-b77e-e55a23ca6a68"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:09:39 crc kubenswrapper[4739]: I0218 15:09:39.783841 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpnnd\" (UniqueName: \"kubernetes.io/projected/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-kube-api-access-xpnnd\") on node \"crc\" DevicePath \"\"" Feb 18 15:09:39 crc kubenswrapper[4739]: I0218 15:09:39.783912 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:09:39 crc kubenswrapper[4739]: I0218 15:09:39.783922 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.051253 4739 generic.go:334] "Generic (PLEG): container finished" podID="e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" containerID="fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2" exitCode=0 Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.051339 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-85jxq" Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.051342 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85jxq" event={"ID":"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68","Type":"ContainerDied","Data":"fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2"} Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.051491 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-85jxq" event={"ID":"e40f51e6-e20f-4cd5-b77e-e55a23ca6a68","Type":"ContainerDied","Data":"dee91aaa97ae41d79d63105d8dd698fcc24ad6d895bd98e68ce5202a805eaeec"} Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.051518 4739 scope.go:117] "RemoveContainer" containerID="fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2" Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.099675 4739 scope.go:117] "RemoveContainer" containerID="d527b8db8e0b27e35bc55dfa7c3257938fd42c01335d770de752389bc96514d1" Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.102043 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-85jxq"] Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.115671 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-85jxq"] Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.129713 4739 scope.go:117] "RemoveContainer" containerID="d77bb274b0ebd18b8e9012910b5d639998421d4e40e05a929cb15ce4ac8cd3ee" Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.196341 4739 scope.go:117] "RemoveContainer" containerID="fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2" Feb 18 15:09:40 crc kubenswrapper[4739]: E0218 15:09:40.196757 4739 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2\": container with ID starting with fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2 not found: ID does not exist" containerID="fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2" Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.196791 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2"} err="failed to get container status \"fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2\": rpc error: code = NotFound desc = could not find container \"fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2\": container with ID starting with fc78994fca72821225b3e2b1045dc1a5c46e17019624d334c6d68eba40b9c4e2 not found: ID does not exist" Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.196812 4739 scope.go:117] "RemoveContainer" containerID="d527b8db8e0b27e35bc55dfa7c3257938fd42c01335d770de752389bc96514d1" Feb 18 15:09:40 crc kubenswrapper[4739]: E0218 15:09:40.197096 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d527b8db8e0b27e35bc55dfa7c3257938fd42c01335d770de752389bc96514d1\": container with ID starting with d527b8db8e0b27e35bc55dfa7c3257938fd42c01335d770de752389bc96514d1 not found: ID does not exist" containerID="d527b8db8e0b27e35bc55dfa7c3257938fd42c01335d770de752389bc96514d1" Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.197137 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d527b8db8e0b27e35bc55dfa7c3257938fd42c01335d770de752389bc96514d1"} err="failed to get container status \"d527b8db8e0b27e35bc55dfa7c3257938fd42c01335d770de752389bc96514d1\": rpc error: code = NotFound desc = could not find container \"d527b8db8e0b27e35bc55dfa7c3257938fd42c01335d770de752389bc96514d1\": container with ID starting with d527b8db8e0b27e35bc55dfa7c3257938fd42c01335d770de752389bc96514d1 not found: ID does not exist" Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.197168 4739 scope.go:117] "RemoveContainer" containerID="d77bb274b0ebd18b8e9012910b5d639998421d4e40e05a929cb15ce4ac8cd3ee" Feb 18 15:09:40 crc kubenswrapper[4739]: E0218 15:09:40.197615 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d77bb274b0ebd18b8e9012910b5d639998421d4e40e05a929cb15ce4ac8cd3ee\": container with ID starting with d77bb274b0ebd18b8e9012910b5d639998421d4e40e05a929cb15ce4ac8cd3ee not found: ID does not exist" containerID="d77bb274b0ebd18b8e9012910b5d639998421d4e40e05a929cb15ce4ac8cd3ee" Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.197638 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d77bb274b0ebd18b8e9012910b5d639998421d4e40e05a929cb15ce4ac8cd3ee"} err="failed to get container status \"d77bb274b0ebd18b8e9012910b5d639998421d4e40e05a929cb15ce4ac8cd3ee\": rpc error: code = NotFound desc = could not find container \"d77bb274b0ebd18b8e9012910b5d639998421d4e40e05a929cb15ce4ac8cd3ee\": container with ID starting with d77bb274b0ebd18b8e9012910b5d639998421d4e40e05a929cb15ce4ac8cd3ee not found: ID does not exist" Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.417274 4739 scope.go:117] "RemoveContainer" 
containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:09:40 crc kubenswrapper[4739]: E0218 15:09:40.417745 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:09:40 crc kubenswrapper[4739]: I0218 15:09:40.427896 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" path="/var/lib/kubelet/pods/e40f51e6-e20f-4cd5-b77e-e55a23ca6a68/volumes" Feb 18 15:09:52 crc kubenswrapper[4739]: I0218 15:09:52.412566 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:09:52 crc kubenswrapper[4739]: E0218 15:09:52.413569 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:10:03 crc kubenswrapper[4739]: I0218 15:10:03.410915 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:10:03 crc kubenswrapper[4739]: E0218 15:10:03.412024 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:10:17 crc kubenswrapper[4739]: I0218 15:10:17.413360 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:10:17 crc kubenswrapper[4739]: E0218 15:10:17.414303 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:10:30 crc kubenswrapper[4739]: I0218 15:10:30.411349 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:10:30 crc kubenswrapper[4739]: E0218 15:10:30.412275 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:10:41 crc kubenswrapper[4739]: 
I0218 15:10:41.410809 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:10:41 crc kubenswrapper[4739]: E0218 15:10:41.412089 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:10:55 crc kubenswrapper[4739]: I0218 15:10:55.410949 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:10:55 crc kubenswrapper[4739]: E0218 15:10:55.412014 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:11:07 crc kubenswrapper[4739]: I0218 15:11:07.411245 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:11:07 crc kubenswrapper[4739]: E0218 15:11:07.412250 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:11:20 crc kubenswrapper[4739]: I0218 15:11:20.411492 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:11:20 crc kubenswrapper[4739]: E0218 15:11:20.412610 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:11:34 crc kubenswrapper[4739]: I0218 15:11:34.411037 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:11:34 crc kubenswrapper[4739]: E0218 15:11:34.411935 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:11:46 crc kubenswrapper[4739]: I0218 15:11:46.410727 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:11:46 crc kubenswrapper[4739]: E0218 
15:11:46.411704 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:11:58 crc kubenswrapper[4739]: I0218 15:11:58.418277 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:11:58 crc kubenswrapper[4739]: E0218 15:11:58.419230 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:12:11 crc kubenswrapper[4739]: I0218 15:12:11.410866 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:12:11 crc kubenswrapper[4739]: E0218 15:12:11.411767 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:12:23 crc kubenswrapper[4739]: I0218 15:12:23.411311 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:12:23 crc kubenswrapper[4739]: E0218 15:12:23.413475 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:12:38 crc kubenswrapper[4739]: I0218 15:12:38.420307 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:12:38 crc kubenswrapper[4739]: E0218 15:12:38.421168 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:12:53 crc kubenswrapper[4739]: I0218 15:12:53.411165 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:12:53 crc kubenswrapper[4739]: E0218 15:12:53.412038 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:13:08 crc kubenswrapper[4739]: I0218 15:13:08.421187 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:13:08 crc kubenswrapper[4739]: E0218 15:13:08.422135 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:13:21 crc kubenswrapper[4739]: I0218 15:13:21.411962 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:13:21 crc kubenswrapper[4739]: E0218 15:13:21.413176 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:13:32 crc kubenswrapper[4739]: I0218 15:13:32.410942 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:13:32 crc kubenswrapper[4739]: E0218 15:13:32.411880 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.642272 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 15:13:41 crc kubenswrapper[4739]: E0218 15:13:41.644405 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" containerName="extract-content" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.644535 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" containerName="extract-content" Feb 18 15:13:41 crc kubenswrapper[4739]: E0218 15:13:41.644679 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" containerName="registry-server" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.644768 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" containerName="registry-server" Feb 18 15:13:41 crc kubenswrapper[4739]: E0218 15:13:41.644874 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" containerName="extract-utilities" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.644960 4739 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" containerName="extract-utilities" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.645317 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e40f51e6-e20f-4cd5-b77e-e55a23ca6a68" containerName="registry-server" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.646532 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.653582 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.654163 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.654271 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-qfs6g" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.655752 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.675579 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.755089 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2d70fa76-2eec-4ca5-abd7-44a082625a40-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.755159 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.755220 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-964bz\" (UniqueName: \"kubernetes.io/projected/2d70fa76-2eec-4ca5-abd7-44a082625a40-kube-api-access-964bz\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.755403 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.755427 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.755486 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/2d70fa76-2eec-4ca5-abd7-44a082625a40-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.755532 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.755592 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d70fa76-2eec-4ca5-abd7-44a082625a40-config-data\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.755629 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2d70fa76-2eec-4ca5-abd7-44a082625a40-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.857926 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2d70fa76-2eec-4ca5-abd7-44a082625a40-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.857986 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.858043 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-964bz\" (UniqueName: \"kubernetes.io/projected/2d70fa76-2eec-4ca5-abd7-44a082625a40-kube-api-access-964bz\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.858194 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.858214 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.858248 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/2d70fa76-2eec-4ca5-abd7-44a082625a40-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.858287 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.858336 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d70fa76-2eec-4ca5-abd7-44a082625a40-config-data\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.858360 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2d70fa76-2eec-4ca5-abd7-44a082625a40-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.859142 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2d70fa76-2eec-4ca5-abd7-44a082625a40-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.859236 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2d70fa76-2eec-4ca5-abd7-44a082625a40-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.859562 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2d70fa76-2eec-4ca5-abd7-44a082625a40-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.859959 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d70fa76-2eec-4ca5-abd7-44a082625a40-config-data\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.860306 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.865682 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-ca-certs\") pod \"tempest-tests-tempest\" (UID: 
\"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.867926 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.872645 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.882755 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-964bz\" (UniqueName: \"kubernetes.io/projected/2d70fa76-2eec-4ca5-abd7-44a082625a40-kube-api-access-964bz\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.912671 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"tempest-tests-tempest\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " pod="openstack/tempest-tests-tempest" Feb 18 15:13:41 crc kubenswrapper[4739]: I0218 15:13:41.974634 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 15:13:42 crc kubenswrapper[4739]: I0218 15:13:42.496276 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 15:13:43 crc kubenswrapper[4739]: I0218 15:13:43.007950 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 15:13:43 crc kubenswrapper[4739]: I0218 15:13:43.411289 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:13:43 crc kubenswrapper[4739]: E0218 15:13:43.412717 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:13:43 crc kubenswrapper[4739]: I0218 15:13:43.684768 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2d70fa76-2eec-4ca5-abd7-44a082625a40","Type":"ContainerStarted","Data":"49f393666c6fdee741ccda2b76d76452444d662539e8f00cf321ebbda9fd14bc"} Feb 18 15:13:56 crc kubenswrapper[4739]: I0218 15:13:56.411375 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:13:56 crc kubenswrapper[4739]: E0218 15:13:56.412117 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:14:07 crc kubenswrapper[4739]: I0218 15:14:07.411084 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:14:07 crc kubenswrapper[4739]: E0218 15:14:07.411935 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:14:18 crc kubenswrapper[4739]: I0218 15:14:18.411338 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:14:18 crc kubenswrapper[4739]: E0218 15:14:18.412255 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:14:19 crc kubenswrapper[4739]: E0218 15:14:19.462265 4739 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 18 15:14:19 crc kubenswrapper[4739]: E0218 15:14:19.466572 4739 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-964bz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(2d70fa76-2eec-4ca5-abd7-44a082625a40): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 15:14:19 crc kubenswrapper[4739]: E0218 15:14:19.467811 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="2d70fa76-2eec-4ca5-abd7-44a082625a40" Feb 18 15:14:20 crc kubenswrapper[4739]: E0218 15:14:20.144069 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="2d70fa76-2eec-4ca5-abd7-44a082625a40" Feb 18 15:14:33 crc kubenswrapper[4739]: I0218 15:14:33.410963 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:14:34 crc kubenswrapper[4739]: I0218 15:14:34.295221 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"3ff0a839c3cd91b61bc5a9bec2e5ff1579fcf9258342af265e7f1b255f36409c"} Feb 18 15:14:34 crc kubenswrapper[4739]: I0218 15:14:34.880227 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 18 15:14:37 crc kubenswrapper[4739]: I0218 15:14:37.331123 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2d70fa76-2eec-4ca5-abd7-44a082625a40","Type":"ContainerStarted","Data":"8ce8bd03e7ae58cb2a6f6888de57ac7cc952f171cde62e5925154c461eb9d79b"} Feb 18 15:14:37 crc kubenswrapper[4739]: I0218 15:14:37.356762 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.487458236 podStartE2EDuration="57.356743168s" podCreationTimestamp="2026-02-18 15:13:40 +0000 UTC" firstStartedPulling="2026-02-18 15:13:43.007631747 +0000 UTC m=+4455.503352669" lastFinishedPulling="2026-02-18 15:14:34.876916679 +0000 UTC m=+4507.372637601" observedRunningTime="2026-02-18 15:14:37.347600688 +0000 UTC m=+4509.843321610" watchObservedRunningTime="2026-02-18 15:14:37.356743168 +0000 UTC m=+4509.852464100" Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.182655 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258"] Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.186598 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258" Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.190025 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.190540 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.194982 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4br6g\" (UniqueName: \"kubernetes.io/projected/20a5bbeb-3d44-4bb2-8650-b037712d0c02-kube-api-access-4br6g\") pod \"collect-profiles-29523795-c9258\" (UID: \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258" Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.195039 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20a5bbeb-3d44-4bb2-8650-b037712d0c02-secret-volume\") pod \"collect-profiles-29523795-c9258\" (UID: \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258" Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.195213 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20a5bbeb-3d44-4bb2-8650-b037712d0c02-config-volume\") pod \"collect-profiles-29523795-c9258\" (UID: \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258" Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.196930 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258"] Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.296055 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20a5bbeb-3d44-4bb2-8650-b037712d0c02-config-volume\") pod \"collect-profiles-29523795-c9258\" (UID: \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258" Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.296172 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4br6g\" (UniqueName: \"kubernetes.io/projected/20a5bbeb-3d44-4bb2-8650-b037712d0c02-kube-api-access-4br6g\") pod \"collect-profiles-29523795-c9258\" (UID: \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258" Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.296201 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20a5bbeb-3d44-4bb2-8650-b037712d0c02-secret-volume\") pod \"collect-profiles-29523795-c9258\" (UID: \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258" Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.297905 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20a5bbeb-3d44-4bb2-8650-b037712d0c02-config-volume\") pod 
\"collect-profiles-29523795-c9258\" (UID: \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258" Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.308297 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20a5bbeb-3d44-4bb2-8650-b037712d0c02-secret-volume\") pod \"collect-profiles-29523795-c9258\" (UID: \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258" Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.321288 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4br6g\" (UniqueName: \"kubernetes.io/projected/20a5bbeb-3d44-4bb2-8650-b037712d0c02-kube-api-access-4br6g\") pod \"collect-profiles-29523795-c9258\" (UID: \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258" Feb 18 15:15:00 crc kubenswrapper[4739]: I0218 15:15:00.525736 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258" Feb 18 15:15:01 crc kubenswrapper[4739]: I0218 15:15:01.131549 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258"] Feb 18 15:15:01 crc kubenswrapper[4739]: W0218 15:15:01.138607 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20a5bbeb_3d44_4bb2_8650_b037712d0c02.slice/crio-400e7ac3a1ccf2f89dd46906225638e768f68da0a64119cbf2717713b39d5efe WatchSource:0}: Error finding container 400e7ac3a1ccf2f89dd46906225638e768f68da0a64119cbf2717713b39d5efe: Status 404 returned error can't find the container with id 400e7ac3a1ccf2f89dd46906225638e768f68da0a64119cbf2717713b39d5efe Feb 18 15:15:01 crc kubenswrapper[4739]: I0218 15:15:01.576816 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258" event={"ID":"20a5bbeb-3d44-4bb2-8650-b037712d0c02","Type":"ContainerStarted","Data":"e5511ad99948f34930829fc526d57e4dd5dace947682549c340920c1647859be"} Feb 18 15:15:01 crc kubenswrapper[4739]: I0218 15:15:01.577169 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258" event={"ID":"20a5bbeb-3d44-4bb2-8650-b037712d0c02","Type":"ContainerStarted","Data":"400e7ac3a1ccf2f89dd46906225638e768f68da0a64119cbf2717713b39d5efe"} Feb 18 15:15:01 crc kubenswrapper[4739]: I0218 15:15:01.596315 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258" podStartSLOduration=1.596291828 podStartE2EDuration="1.596291828s" podCreationTimestamp="2026-02-18 15:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 15:15:01.595466817 +0000 UTC m=+4534.091187749" watchObservedRunningTime="2026-02-18 15:15:01.596291828 +0000 UTC m=+4534.092012750" Feb 18 15:15:02 crc kubenswrapper[4739]: I0218 15:15:02.592742 4739 generic.go:334] "Generic (PLEG): container finished" podID="20a5bbeb-3d44-4bb2-8650-b037712d0c02" containerID="e5511ad99948f34930829fc526d57e4dd5dace947682549c340920c1647859be" exitCode=0 Feb 18 15:15:02 crc kubenswrapper[4739]: I0218 15:15:02.592791 
4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258" event={"ID":"20a5bbeb-3d44-4bb2-8650-b037712d0c02","Type":"ContainerDied","Data":"e5511ad99948f34930829fc526d57e4dd5dace947682549c340920c1647859be"} Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.287362 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258" Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.331258 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20a5bbeb-3d44-4bb2-8650-b037712d0c02-secret-volume\") pod \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\" (UID: \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\") " Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.331411 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4br6g\" (UniqueName: \"kubernetes.io/projected/20a5bbeb-3d44-4bb2-8650-b037712d0c02-kube-api-access-4br6g\") pod \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\" (UID: \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\") " Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.331645 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20a5bbeb-3d44-4bb2-8650-b037712d0c02-config-volume\") pod \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\" (UID: \"20a5bbeb-3d44-4bb2-8650-b037712d0c02\") " Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.332822 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20a5bbeb-3d44-4bb2-8650-b037712d0c02-config-volume" (OuterVolumeSpecName: "config-volume") pod "20a5bbeb-3d44-4bb2-8650-b037712d0c02" (UID: "20a5bbeb-3d44-4bb2-8650-b037712d0c02"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.339978 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20a5bbeb-3d44-4bb2-8650-b037712d0c02-kube-api-access-4br6g" (OuterVolumeSpecName: "kube-api-access-4br6g") pod "20a5bbeb-3d44-4bb2-8650-b037712d0c02" (UID: "20a5bbeb-3d44-4bb2-8650-b037712d0c02"). InnerVolumeSpecName "kube-api-access-4br6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.340486 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20a5bbeb-3d44-4bb2-8650-b037712d0c02-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "20a5bbeb-3d44-4bb2-8650-b037712d0c02" (UID: "20a5bbeb-3d44-4bb2-8650-b037712d0c02"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.434572 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20a5bbeb-3d44-4bb2-8650-b037712d0c02-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.434600 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20a5bbeb-3d44-4bb2-8650-b037712d0c02-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.434610 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4br6g\" (UniqueName: \"kubernetes.io/projected/20a5bbeb-3d44-4bb2-8650-b037712d0c02-kube-api-access-4br6g\") on node \"crc\" DevicePath \"\"" Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.648732 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258" event={"ID":"20a5bbeb-3d44-4bb2-8650-b037712d0c02","Type":"ContainerDied","Data":"400e7ac3a1ccf2f89dd46906225638e768f68da0a64119cbf2717713b39d5efe"} Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.648776 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="400e7ac3a1ccf2f89dd46906225638e768f68da0a64119cbf2717713b39d5efe" Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.648950 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523795-c9258" Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.697216 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j"] Feb 18 15:15:04 crc kubenswrapper[4739]: I0218 15:15:04.708303 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523750-sws8j"] Feb 18 15:15:06 crc kubenswrapper[4739]: I0218 15:15:06.435880 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87fcc484-b43a-4471-9ae0-a8af18a937be" path="/var/lib/kubelet/pods/87fcc484-b43a-4471-9ae0-a8af18a937be/volumes" Feb 18 15:15:38 crc kubenswrapper[4739]: I0218 15:15:38.283960 4739 scope.go:117] "RemoveContainer" containerID="9b76a0bd2d504547a365abbe6087525e7fb33e148bde30e2d85310db58fb4427" Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.466516 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k6wtr"] Feb 18 15:16:26 crc kubenswrapper[4739]: E0218 15:16:26.473618 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a5bbeb-3d44-4bb2-8650-b037712d0c02" containerName="collect-profiles" Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.473722 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a5bbeb-3d44-4bb2-8650-b037712d0c02" containerName="collect-profiles" Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.476277 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="20a5bbeb-3d44-4bb2-8650-b037712d0c02" containerName="collect-profiles" Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.491413 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6wtr" Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.613242 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6wtr"] Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.691717 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23565011-792b-4161-97b4-45ada5703730-catalog-content\") pod \"redhat-marketplace-k6wtr\" (UID: \"23565011-792b-4161-97b4-45ada5703730\") " pod="openshift-marketplace/redhat-marketplace-k6wtr" Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.692024 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmgh7\" (UniqueName: \"kubernetes.io/projected/23565011-792b-4161-97b4-45ada5703730-kube-api-access-lmgh7\") pod \"redhat-marketplace-k6wtr\" (UID: \"23565011-792b-4161-97b4-45ada5703730\") " pod="openshift-marketplace/redhat-marketplace-k6wtr" Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.692045 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23565011-792b-4161-97b4-45ada5703730-utilities\") pod \"redhat-marketplace-k6wtr\" (UID: \"23565011-792b-4161-97b4-45ada5703730\") " pod="openshift-marketplace/redhat-marketplace-k6wtr" Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.795142 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmgh7\" (UniqueName: \"kubernetes.io/projected/23565011-792b-4161-97b4-45ada5703730-kube-api-access-lmgh7\") pod \"redhat-marketplace-k6wtr\" (UID: \"23565011-792b-4161-97b4-45ada5703730\") " pod="openshift-marketplace/redhat-marketplace-k6wtr" Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.795201 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23565011-792b-4161-97b4-45ada5703730-utilities\") pod \"redhat-marketplace-k6wtr\" (UID: \"23565011-792b-4161-97b4-45ada5703730\") " pod="openshift-marketplace/redhat-marketplace-k6wtr" Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.795623 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23565011-792b-4161-97b4-45ada5703730-catalog-content\") pod \"redhat-marketplace-k6wtr\" (UID: \"23565011-792b-4161-97b4-45ada5703730\") " pod="openshift-marketplace/redhat-marketplace-k6wtr" Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.797282 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23565011-792b-4161-97b4-45ada5703730-utilities\") pod \"redhat-marketplace-k6wtr\" (UID: \"23565011-792b-4161-97b4-45ada5703730\") " pod="openshift-marketplace/redhat-marketplace-k6wtr" Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.797397 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23565011-792b-4161-97b4-45ada5703730-catalog-content\") pod \"redhat-marketplace-k6wtr\" (UID: \"23565011-792b-4161-97b4-45ada5703730\") " pod="openshift-marketplace/redhat-marketplace-k6wtr" Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.827176 4739 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-lmgh7\" (UniqueName: \"kubernetes.io/projected/23565011-792b-4161-97b4-45ada5703730-kube-api-access-lmgh7\") pod \"redhat-marketplace-k6wtr\" (UID: \"23565011-792b-4161-97b4-45ada5703730\") " pod="openshift-marketplace/redhat-marketplace-k6wtr" Feb 18 15:16:26 crc kubenswrapper[4739]: I0218 15:16:26.841782 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6wtr" Feb 18 15:16:28 crc kubenswrapper[4739]: I0218 15:16:28.522744 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6wtr"] Feb 18 15:16:28 crc kubenswrapper[4739]: W0218 15:16:28.597691 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23565011_792b_4161_97b4_45ada5703730.slice/crio-8c2bff346f976da76c946bfa6111b508c271512bf1068c19960eac1592d3fae5 WatchSource:0}: Error finding container 8c2bff346f976da76c946bfa6111b508c271512bf1068c19960eac1592d3fae5: Status 404 returned error can't find the container with id 8c2bff346f976da76c946bfa6111b508c271512bf1068c19960eac1592d3fae5 Feb 18 15:16:28 crc kubenswrapper[4739]: I0218 15:16:28.620322 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6wtr" event={"ID":"23565011-792b-4161-97b4-45ada5703730","Type":"ContainerStarted","Data":"8c2bff346f976da76c946bfa6111b508c271512bf1068c19960eac1592d3fae5"} Feb 18 15:16:29 crc kubenswrapper[4739]: I0218 15:16:29.631672 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6wtr" event={"ID":"23565011-792b-4161-97b4-45ada5703730","Type":"ContainerDied","Data":"5a103046e32e42a528acaed6df0225c2cd7f99af2ad5a68b58e158fd745ccc3b"} Feb 18 15:16:29 crc kubenswrapper[4739]: I0218 15:16:29.632101 4739 generic.go:334] "Generic (PLEG): container finished" podID="23565011-792b-4161-97b4-45ada5703730" containerID="5a103046e32e42a528acaed6df0225c2cd7f99af2ad5a68b58e158fd745ccc3b" exitCode=0 Feb 18 15:16:31 crc kubenswrapper[4739]: I0218 15:16:31.658923 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6wtr" event={"ID":"23565011-792b-4161-97b4-45ada5703730","Type":"ContainerStarted","Data":"7cacc3d49d94cbb8aefee2bf91f554922c6da53f57dfa12101add4db6d18366f"} Feb 18 15:16:33 crc kubenswrapper[4739]: I0218 15:16:33.689865 4739 generic.go:334] "Generic (PLEG): container finished" podID="23565011-792b-4161-97b4-45ada5703730" containerID="7cacc3d49d94cbb8aefee2bf91f554922c6da53f57dfa12101add4db6d18366f" exitCode=0 Feb 18 15:16:33 crc kubenswrapper[4739]: I0218 15:16:33.689942 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6wtr" event={"ID":"23565011-792b-4161-97b4-45ada5703730","Type":"ContainerDied","Data":"7cacc3d49d94cbb8aefee2bf91f554922c6da53f57dfa12101add4db6d18366f"} Feb 18 15:16:35 crc kubenswrapper[4739]: I0218 15:16:35.713051 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6wtr" event={"ID":"23565011-792b-4161-97b4-45ada5703730","Type":"ContainerStarted","Data":"cfe998818da280781f7bdc044172c538925a006161008ba32bbf943e4e57adc9"} Feb 18 15:16:35 crc kubenswrapper[4739]: I0218 15:16:35.739883 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k6wtr" podStartSLOduration=5.065571982 
podStartE2EDuration="9.73588393s" podCreationTimestamp="2026-02-18 15:16:26 +0000 UTC" firstStartedPulling="2026-02-18 15:16:29.633635703 +0000 UTC m=+4622.129356625" lastFinishedPulling="2026-02-18 15:16:34.303947651 +0000 UTC m=+4626.799668573" observedRunningTime="2026-02-18 15:16:35.73313467 +0000 UTC m=+4628.228855602" watchObservedRunningTime="2026-02-18 15:16:35.73588393 +0000 UTC m=+4628.231604852" Feb 18 15:16:36 crc kubenswrapper[4739]: I0218 15:16:36.846908 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k6wtr" Feb 18 15:16:36 crc kubenswrapper[4739]: I0218 15:16:36.847273 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k6wtr" Feb 18 15:16:38 crc kubenswrapper[4739]: I0218 15:16:38.249104 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-k6wtr" podUID="23565011-792b-4161-97b4-45ada5703730" containerName="registry-server" probeResult="failure" output=< Feb 18 15:16:38 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:16:38 crc kubenswrapper[4739]: > Feb 18 15:16:47 crc kubenswrapper[4739]: I0218 15:16:47.908831 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-k6wtr" podUID="23565011-792b-4161-97b4-45ada5703730" containerName="registry-server" probeResult="failure" output=< Feb 18 15:16:47 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:16:47 crc kubenswrapper[4739]: > Feb 18 15:16:55 crc kubenswrapper[4739]: I0218 15:16:55.018735 4739 trace.go:236] Trace[572744698]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (18-Feb-2026 15:16:53.986) (total time: 1029ms): Feb 18 15:16:55 crc kubenswrapper[4739]: Trace[572744698]: [1.029446844s] [1.029446844s] END Feb 18 15:16:58 crc kubenswrapper[4739]: I0218 15:16:58.040581 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-k6wtr" podUID="23565011-792b-4161-97b4-45ada5703730" containerName="registry-server" probeResult="failure" output=< Feb 18 15:16:58 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:16:58 crc kubenswrapper[4739]: > Feb 18 15:16:59 crc kubenswrapper[4739]: I0218 15:16:59.372612 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:16:59 crc kubenswrapper[4739]: I0218 15:16:59.373914 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:17:06 crc kubenswrapper[4739]: I0218 15:17:06.980320 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k6wtr" Feb 18 15:17:07 crc kubenswrapper[4739]: I0218 15:17:07.045380 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k6wtr" Feb 18 15:17:09 crc kubenswrapper[4739]: I0218 
15:17:09.186783 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6wtr"] Feb 18 15:17:09 crc kubenswrapper[4739]: I0218 15:17:09.211348 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k6wtr" podUID="23565011-792b-4161-97b4-45ada5703730" containerName="registry-server" containerID="cri-o://cfe998818da280781f7bdc044172c538925a006161008ba32bbf943e4e57adc9" gracePeriod=2 Feb 18 15:17:10 crc kubenswrapper[4739]: I0218 15:17:10.155647 4739 generic.go:334] "Generic (PLEG): container finished" podID="23565011-792b-4161-97b4-45ada5703730" containerID="cfe998818da280781f7bdc044172c538925a006161008ba32bbf943e4e57adc9" exitCode=0 Feb 18 15:17:10 crc kubenswrapper[4739]: I0218 15:17:10.155716 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6wtr" event={"ID":"23565011-792b-4161-97b4-45ada5703730","Type":"ContainerDied","Data":"cfe998818da280781f7bdc044172c538925a006161008ba32bbf943e4e57adc9"} Feb 18 15:17:11 crc kubenswrapper[4739]: I0218 15:17:11.490025 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6wtr" Feb 18 15:17:11 crc kubenswrapper[4739]: I0218 15:17:11.622104 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmgh7\" (UniqueName: \"kubernetes.io/projected/23565011-792b-4161-97b4-45ada5703730-kube-api-access-lmgh7\") pod \"23565011-792b-4161-97b4-45ada5703730\" (UID: \"23565011-792b-4161-97b4-45ada5703730\") " Feb 18 15:17:11 crc kubenswrapper[4739]: I0218 15:17:11.622261 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23565011-792b-4161-97b4-45ada5703730-utilities\") pod \"23565011-792b-4161-97b4-45ada5703730\" (UID: \"23565011-792b-4161-97b4-45ada5703730\") " Feb 18 15:17:11 crc kubenswrapper[4739]: I0218 15:17:11.622865 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23565011-792b-4161-97b4-45ada5703730-catalog-content\") pod \"23565011-792b-4161-97b4-45ada5703730\" (UID: \"23565011-792b-4161-97b4-45ada5703730\") " Feb 18 15:17:11 crc kubenswrapper[4739]: I0218 15:17:11.702231 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23565011-792b-4161-97b4-45ada5703730-utilities" (OuterVolumeSpecName: "utilities") pod "23565011-792b-4161-97b4-45ada5703730" (UID: "23565011-792b-4161-97b4-45ada5703730"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:17:11 crc kubenswrapper[4739]: I0218 15:17:11.727860 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23565011-792b-4161-97b4-45ada5703730-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:17:11 crc kubenswrapper[4739]: I0218 15:17:11.755387 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23565011-792b-4161-97b4-45ada5703730-kube-api-access-lmgh7" (OuterVolumeSpecName: "kube-api-access-lmgh7") pod "23565011-792b-4161-97b4-45ada5703730" (UID: "23565011-792b-4161-97b4-45ada5703730"). InnerVolumeSpecName "kube-api-access-lmgh7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:17:11 crc kubenswrapper[4739]: I0218 15:17:11.809614 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23565011-792b-4161-97b4-45ada5703730-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23565011-792b-4161-97b4-45ada5703730" (UID: "23565011-792b-4161-97b4-45ada5703730"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:17:11 crc kubenswrapper[4739]: I0218 15:17:11.831915 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23565011-792b-4161-97b4-45ada5703730-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:17:11 crc kubenswrapper[4739]: I0218 15:17:11.831953 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmgh7\" (UniqueName: \"kubernetes.io/projected/23565011-792b-4161-97b4-45ada5703730-kube-api-access-lmgh7\") on node \"crc\" DevicePath \"\"" Feb 18 15:17:12 crc kubenswrapper[4739]: I0218 15:17:12.216220 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6wtr" event={"ID":"23565011-792b-4161-97b4-45ada5703730","Type":"ContainerDied","Data":"8c2bff346f976da76c946bfa6111b508c271512bf1068c19960eac1592d3fae5"} Feb 18 15:17:12 crc kubenswrapper[4739]: I0218 15:17:12.216357 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6wtr" Feb 18 15:17:12 crc kubenswrapper[4739]: I0218 15:17:12.219825 4739 scope.go:117] "RemoveContainer" containerID="cfe998818da280781f7bdc044172c538925a006161008ba32bbf943e4e57adc9" Feb 18 15:17:12 crc kubenswrapper[4739]: I0218 15:17:12.329010 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6wtr"] Feb 18 15:17:12 crc kubenswrapper[4739]: I0218 15:17:12.346576 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6wtr"] Feb 18 15:17:12 crc kubenswrapper[4739]: I0218 15:17:12.431082 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23565011-792b-4161-97b4-45ada5703730" path="/var/lib/kubelet/pods/23565011-792b-4161-97b4-45ada5703730/volumes" Feb 18 15:17:12 crc kubenswrapper[4739]: I0218 15:17:12.453272 4739 scope.go:117] "RemoveContainer" containerID="7cacc3d49d94cbb8aefee2bf91f554922c6da53f57dfa12101add4db6d18366f" Feb 18 15:17:12 crc kubenswrapper[4739]: I0218 15:17:12.522719 4739 scope.go:117] "RemoveContainer" containerID="5a103046e32e42a528acaed6df0225c2cd7f99af2ad5a68b58e158fd745ccc3b" Feb 18 15:17:15 crc kubenswrapper[4739]: I0218 15:17:15.132921 4739 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:15 crc kubenswrapper[4739]: I0218 15:17:15.138859 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:15 crc 
kubenswrapper[4739]: I0218 15:17:15.802197 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="acc9bbc5-8705-410b-977b-ca9245834e36" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:15 crc kubenswrapper[4739]: I0218 15:17:15.819059 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="acc9bbc5-8705-410b-977b-ca9245834e36" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:16 crc kubenswrapper[4739]: I0218 15:17:16.637823 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" podUID="b1d0315e-6ccb-4c6a-a488-98454bb41358" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:16 crc kubenswrapper[4739]: I0218 15:17:16.861382 4739 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-kjphg container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:16 crc kubenswrapper[4739]: I0218 15:17:16.863433 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" podUID="26e9543b-d10d-461c-8751-99e53b680e1c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:16 crc kubenswrapper[4739]: I0218 15:17:16.861478 4739 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-kjphg container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:16 crc kubenswrapper[4739]: I0218 15:17:16.863776 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" podUID="26e9543b-d10d-461c-8751-99e53b680e1c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:17 crc kubenswrapper[4739]: I0218 15:17:17.138717 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-w8l6z" podUID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:17 crc kubenswrapper[4739]: I0218 15:17:17.142065 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Feb 18 15:17:17 crc kubenswrapper[4739]: I0218 15:17:17.142132 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:17 crc kubenswrapper[4739]: I0218 15:17:17.142084 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:17 crc kubenswrapper[4739]: I0218 15:17:17.142250 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:17 crc kubenswrapper[4739]: I0218 15:17:17.483729 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" podUID="52927612-b074-4573-aa63-41cbb1d704bf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:17 crc kubenswrapper[4739]: I0218 15:17:17.604605 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr" podUID="c9731232-5945-414d-bf7c-cd9207130675" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.39:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:17 crc kubenswrapper[4739]: I0218 15:17:17.604600 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr" podUID="c9731232-5945-414d-bf7c-cd9207130675" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.39:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:17 crc kubenswrapper[4739]: I0218 15:17:17.795237 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="869aa11b-eba7-4598-90dc-d50c642b9120" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:17 crc kubenswrapper[4739]: I0218 15:17:17.795237 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="869aa11b-eba7-4598-90dc-d50c642b9120" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:17 crc kubenswrapper[4739]: I0218 15:17:17.797679 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 18 15:17:18 crc kubenswrapper[4739]: I0218 15:17:18.203102 4739 prober.go:107] "Probe failed" 
probeType="Readiness" pod="metallb-system/speaker-8gqkq" podUID="65fdc711-6806-433f-9f62-a09e816c6acf" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:18 crc kubenswrapper[4739]: I0218 15:17:18.674639 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-28vcn container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:18 crc kubenswrapper[4739]: I0218 15:17:18.675028 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:18 crc kubenswrapper[4739]: I0218 15:17:18.674660 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-28vcn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:18 crc kubenswrapper[4739]: I0218 15:17:18.675164 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:18 crc kubenswrapper[4739]: I0218 15:17:18.742497 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-nd7jd container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:18 crc kubenswrapper[4739]: I0218 15:17:18.742585 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" podUID="717b73b9-8190-41ce-8513-eb314a37cdfd" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:18 crc kubenswrapper[4739]: I0218 15:17:18.752050 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-whgjq container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:18 crc kubenswrapper[4739]: I0218 15:17:18.752132 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" podUID="82d2d64c-4971-48ee-a75c-30adadf054de" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:19 crc kubenswrapper[4739]: I0218 15:17:19.424993 4739 trace.go:236] Trace[1068730840]: "Calculate volume metrics of mysql-db for pod openstack/openstack-galera-0" 
(18-Feb-2026 15:17:16.058) (total time: 3362ms): Feb 18 15:17:19 crc kubenswrapper[4739]: Trace[1068730840]: [3.362307945s] [3.362307945s] END Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.188666 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.190069 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.190247 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.190108 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.741716 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.742255 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.782677 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.783004 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.827368 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get 
\"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.827758 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.827401 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.828217 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.837972 4739 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.838055 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:20 crc kubenswrapper[4739]: I0218 15:17:20.948721 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" podUID="2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.329606 4739 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-9zgsz container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.329663 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" podUID="fb09df70-be06-48b6-a41d-16fb110b7c55" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.330073 4739 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" podUID="40be8fff-51f0-467a-aca5-517e02eea23b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.524687 4739 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-qfljx container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.524768 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" podUID="34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.525074 4739 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-qfljx container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.525111 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" podUID="34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.536607 4739 patch_prober.go:28] interesting pod/console-b9f98d489-4zk5t container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.536687 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-b9f98d489-4zk5t" podUID="39496c01-fddc-4d5c-8c1a-32af402a87cd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.664677 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" podUID="e19083b1-791a-4549-b64e-0bb0032abad2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.686174 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.686509 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.686267 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:21 crc kubenswrapper[4739]: I0218 15:17:21.686806 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.076685 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" podUID="caed7b7d-66db-4bd9-ba33-efc5f3951069" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.111480 4739 patch_prober.go:28] interesting pod/metrics-server-f5c56b6cc-ft74f container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.111549 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" podUID="ac03ed3e-3bdc-48cd-bf95-119b31b15208" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.376679 4739 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mqkqw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.376844 4739 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mqkqw container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.377086 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" 
podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.377862 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.567374 4739 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-lpf5k container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.7:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.567512 4739 patch_prober.go:28] interesting pod/monitoring-plugin-58bc79f98c-nzqw5 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.567516 4739 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-lpf5k container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.7:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.567468 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" podUID="2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.7:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.567573 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" podUID="2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.7:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:22 crc kubenswrapper[4739]: I0218 15:17:22.567533 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" podUID="34c89fd8-2d23-4587-a802-4c07ad76bcd7" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.048781 4739 patch_prober.go:28] interesting pod/route-controller-manager-77ddcd9567-p8jx5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.048957 4739 patch_prober.go:28] interesting 
pod/route-controller-manager-77ddcd9567-p8jx5 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.049168 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" podUID="8166ccce-dd66-40c5-aed1-8f560c573a6e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.049223 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" podUID="8166ccce-dd66-40c5-aed1-8f560c573a6e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.082950 4739 patch_prober.go:28] interesting pod/controller-manager-7b7465fb97-9dgmn container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.083010 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" podUID="0480fc06-58bc-47d0-9446-8eb7ecad6509" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.082982 4739 patch_prober.go:28] interesting pod/controller-manager-7b7465fb97-9dgmn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.083914 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" podUID="0480fc06-58bc-47d0-9446-8eb7ecad6509" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.150078 4739 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-68g9x container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.150163 4739 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" podUID="d2537052-1467-4892-afe4-cafbbdfbd645" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.433259 4739 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-ccsmg container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.433339 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" podUID="3886312a-0449-43cc-b914-a4633b2c7e80" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.482360 4739 patch_prober.go:28] interesting pod/nmstate-webhook-866bcb46dc-wtz97 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.85:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.482461 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" podUID="ff0bf868-48fc-48a7-845d-3286c1dd16f0" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.85:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.597827 4739 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-grbnx container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.597902 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" podUID="f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.664837 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" podUID="8add2ed9-6416-4e9f-a3a1-f8a615962850" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.664776 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" podUID="8add2ed9-6416-4e9f-a3a1-f8a615962850" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.741146 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-nd7jd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.741214 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" podUID="717b73b9-8190-41ce-8513-eb314a37cdfd" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.751129 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-whgjq container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.751184 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" podUID="82d2d64c-4971-48ee-a75c-30adadf054de" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:23 crc kubenswrapper[4739]: I0218 15:17:23.803232 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 18 15:17:24 crc kubenswrapper[4739]: I0218 15:17:24.740324 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-nd7jd container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.53:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:24 crc kubenswrapper[4739]: I0218 15:17:24.740662 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" podUID="717b73b9-8190-41ce-8513-eb314a37cdfd" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:24 crc kubenswrapper[4739]: I0218 15:17:24.751558 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-whgjq container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.54:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:24 crc kubenswrapper[4739]: I0218 15:17:24.751630 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" podUID="82d2d64c-4971-48ee-a75c-30adadf054de" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:25 crc kubenswrapper[4739]: I0218 15:17:25.793867 4739 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openstack/openstack-galera-0" podUID="acc9bbc5-8705-410b-977b-ca9245834e36" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:25 crc kubenswrapper[4739]: I0218 15:17:25.794013 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="acc9bbc5-8705-410b-977b-ca9245834e36" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:25 crc kubenswrapper[4739]: I0218 15:17:25.811908 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="06c16940-f153-4d15-891d-b0b91e9bce5a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:25 crc kubenswrapper[4739]: I0218 15:17:25.812464 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="06c16940-f153-4d15-891d-b0b91e9bce5a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:26 crc kubenswrapper[4739]: I0218 15:17:26.017706 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" podUID="0183ebc4-768c-4e08-8f1c-059fff8ba4e3" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.90:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:26 crc kubenswrapper[4739]: I0218 15:17:26.018042 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" podUID="0183ebc4-768c-4e08-8f1c-059fff8ba4e3" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.90:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:26 crc kubenswrapper[4739]: I0218 15:17:26.668840 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" podUID="b1d0315e-6ccb-4c6a-a488-98454bb41358" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:26 crc kubenswrapper[4739]: I0218 15:17:26.668972 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" podUID="b1d0315e-6ccb-4c6a-a488-98454bb41358" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:26 crc kubenswrapper[4739]: I0218 15:17:26.763750 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-69bbfbf88f-tr2nx" podUID="7bcf09d7-a0a6-4225-a222-1c05f51e5f7d" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:26 crc kubenswrapper[4739]: I0218 15:17:26.763779 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-69bbfbf88f-tr2nx" podUID="7bcf09d7-a0a6-4225-a222-1c05f51e5f7d" containerName="controller" probeResult="failure" output="Get 
\"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:26 crc kubenswrapper[4739]: I0218 15:17:26.860923 4739 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-kjphg container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:26 crc kubenswrapper[4739]: I0218 15:17:26.861051 4739 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-kjphg container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:26 crc kubenswrapper[4739]: I0218 15:17:26.861124 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" podUID="26e9543b-d10d-461c-8751-99e53b680e1c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:26 crc kubenswrapper[4739]: I0218 15:17:26.861058 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" podUID="26e9543b-d10d-461c-8751-99e53b680e1c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:27 crc kubenswrapper[4739]: I0218 15:17:27.138857 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-w8l6z" podUID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:27 crc kubenswrapper[4739]: I0218 15:17:27.423683 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-cnhvq" podUID="07815587-810f-4c17-a671-8c613b3755d6" containerName="registry-server" probeResult="failure" output=< Feb 18 15:17:27 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:17:27 crc kubenswrapper[4739]: > Feb 18 15:17:27 crc kubenswrapper[4739]: I0218 15:17:27.429769 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-cnhvq" podUID="07815587-810f-4c17-a671-8c613b3755d6" containerName="registry-server" probeResult="failure" output=< Feb 18 15:17:27 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:17:27 crc kubenswrapper[4739]: > Feb 18 15:17:27 crc kubenswrapper[4739]: I0218 15:17:27.795259 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="869aa11b-eba7-4598-90dc-d50c642b9120" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:27 crc kubenswrapper[4739]: I0218 15:17:27.795287 4739 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="869aa11b-eba7-4598-90dc-d50c642b9120" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:28 crc kubenswrapper[4739]: I0218 15:17:28.253697 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-8gqkq" podUID="65fdc711-6806-433f-9f62-a09e816c6acf" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:28 crc kubenswrapper[4739]: I0218 15:17:28.254034 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-8gqkq" podUID="65fdc711-6806-433f-9f62-a09e816c6acf" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:28 crc kubenswrapper[4739]: I0218 15:17:28.689705 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-28vcn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:28 crc kubenswrapper[4739]: I0218 15:17:28.690046 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:28 crc kubenswrapper[4739]: I0218 15:17:28.689715 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-28vcn container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:28 crc kubenswrapper[4739]: I0218 15:17:28.690103 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:28 crc kubenswrapper[4739]: I0218 15:17:28.741891 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-nd7jd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:28 crc kubenswrapper[4739]: I0218 15:17:28.741990 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" podUID="717b73b9-8190-41ce-8513-eb314a37cdfd" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:28 crc kubenswrapper[4739]: I0218 15:17:28.751872 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-whgjq container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:28 crc kubenswrapper[4739]: I0218 15:17:28.751968 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" podUID="82d2d64c-4971-48ee-a75c-30adadf054de" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:29 crc kubenswrapper[4739]: I0218 15:17:29.143622 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:29 crc kubenswrapper[4739]: I0218 15:17:29.144286 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:29 crc kubenswrapper[4739]: I0218 15:17:29.143798 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:29 crc kubenswrapper[4739]: I0218 15:17:29.144794 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:29 crc kubenswrapper[4739]: I0218 15:17:29.373100 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:17:29 crc kubenswrapper[4739]: I0218 15:17:29.374272 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:17:29 crc kubenswrapper[4739]: I0218 15:17:29.807668 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 18 15:17:29 crc kubenswrapper[4739]: I0218 15:17:29.813304 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Feb 18 15:17:29 crc kubenswrapper[4739]: I0218 15:17:29.825257 4739 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"17c3780ab8ac0d7b8c9a7b14ec263189c1e018fcb68ef427cecb539c67cd078b"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Feb 18 15:17:29 crc kubenswrapper[4739]: I0218 15:17:29.832378 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b" containerName="ceilometer-central-agent" containerID="cri-o://17c3780ab8ac0d7b8c9a7b14ec263189c1e018fcb68ef427cecb539c67cd078b" gracePeriod=30 Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.005362 4739 patch_prober.go:28] interesting pod/oauth-openshift-798cf5fb96-6gsw8 container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.005321 4739 patch_prober.go:28] interesting pod/oauth-openshift-798cf5fb96-6gsw8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.023951 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" podUID="bcd76c5a-1d18-4986-9be4-399139f65c11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.024025 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" podUID="bcd76c5a-1d18-4986-9be4-399139f65c11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.788778 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" podUID="c8f419fe-23b1-4a93-97fe-05071df32425" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.788962 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" podUID="d617f67f-2577-418f-a367-42c366c17980" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.870728 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.870798 4739 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.870846 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" podUID="d617f67f-2577-418f-a367-42c366c17980" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.870949 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" podUID="c8f419fe-23b1-4a93-97fe-05071df32425" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.953620 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" podUID="19470a60-c796-4a28-a0e2-65b50fa94ea6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.953857 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.953887 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.953924 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.953941 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.953976 4739 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.953978 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.953992 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.954001 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:30 crc kubenswrapper[4739]: I0218 15:17:30.954067 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" podUID="19470a60-c796-4a28-a0e2-65b50fa94ea6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.095839 4739 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-kmtx7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.095972 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podUID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.095874 4739 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-kmtx7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.096074 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podUID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.177691 4739 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" podUID="fb608395-17b5-4b92-a0be-b5abc08ac979" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.261750 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" podUID="92f1b9c3-1bdd-48ca-9a76-68ace2635cf1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.261828 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" podUID="fb608395-17b5-4b92-a0be-b5abc08ac979" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.261895 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" podUID="92f1b9c3-1bdd-48ca-9a76-68ace2635cf1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.261942 4739 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-9zgsz container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.261970 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" podUID="fb09df70-be06-48b6-a41d-16fb110b7c55" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.361776 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" podUID="40be8fff-51f0-467a-aca5-517e02eea23b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.361946 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" podUID="40be8fff-51f0-467a-aca5-517e02eea23b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.506718 4739 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-qfljx container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.507112 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" podUID="34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.506746 4739 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-qfljx container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.507421 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" podUID="34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.534609 4739 patch_prober.go:28] interesting pod/console-b9f98d489-4zk5t container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.534682 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-b9f98d489-4zk5t" podUID="39496c01-fddc-4d5c-8c1a-32af402a87cd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.704699 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" podUID="d34f7233-92b8-4803-ab81-0da45a4de925" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.787882 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" podUID="e19083b1-791a-4549-b64e-0bb0032abad2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.787915 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.787948 4739 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.787982 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.787995 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.788349 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" podUID="e19083b1-791a-4549-b64e-0bb0032abad2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:31 crc kubenswrapper[4739]: I0218 15:17:31.787783 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" podUID="d34f7233-92b8-4803-ab81-0da45a4de925" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.035752 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" podUID="538f0d59-9eea-4f76-a310-f7f724593a1e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.035783 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" podUID="538f0d59-9eea-4f76-a310-f7f724593a1e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.118517 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" podUID="ac911184-3930-4f7e-9d77-2cc9e7262ea6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.200061 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" podUID="6741b4b4-1817-4639-bdf6-b5be2729a1fa" containerName="manager" probeResult="failure" 
output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.200563 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" podUID="ac911184-3930-4f7e-9d77-2cc9e7262ea6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.201081 4739 patch_prober.go:28] interesting pod/metrics-server-f5c56b6cc-ft74f container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.201143 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" podUID="ac03ed3e-3bdc-48cd-bf95-119b31b15208" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.201335 4739 patch_prober.go:28] interesting pod/metrics-server-f5c56b6cc-ft74f container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.76:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.201385 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" podUID="ac03ed3e-3bdc-48cd-bf95-119b31b15208" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.76:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.283708 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.283785 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.283829 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" podUID="6741b4b4-1817-4639-bdf6-b5be2729a1fa" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.283867 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator 
namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.283931 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.375665 4739 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mqkqw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.375793 4739 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mqkqw container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.375964 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.376004 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.525662 4739 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-lpf5k container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.7:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.525738 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" podUID="2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.7:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.525813 4739 patch_prober.go:28] interesting pod/monitoring-plugin-58bc79f98c-nzqw5 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:32 crc kubenswrapper[4739]: I0218 15:17:32.525838 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" 
podUID="34c89fd8-2d23-4587-a802-4c07ad76bcd7" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.048490 4739 patch_prober.go:28] interesting pod/route-controller-manager-77ddcd9567-p8jx5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.049593 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" podUID="8166ccce-dd66-40c5-aed1-8f560c573a6e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.048493 4739 patch_prober.go:28] interesting pod/route-controller-manager-77ddcd9567-p8jx5 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.049726 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" podUID="8166ccce-dd66-40c5-aed1-8f560c573a6e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.083258 4739 patch_prober.go:28] interesting pod/controller-manager-7b7465fb97-9dgmn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.083434 4739 patch_prober.go:28] interesting pod/controller-manager-7b7465fb97-9dgmn container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": context deadline exceeded" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.083605 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" podUID="0480fc06-58bc-47d0-9446-8eb7ecad6509" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.083675 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" podUID="0480fc06-58bc-47d0-9446-8eb7ecad6509" containerName="controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.67:8443/healthz\": context deadline exceeded" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.150335 4739 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-68g9x container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.150412 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" podUID="d2537052-1467-4892-afe4-cafbbdfbd645" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.223626 4739 trace.go:236] Trace[1608003166]: "Calculate volume metrics of mysql-db for pod openstack/openstack-cell1-galera-0" (18-Feb-2026 15:17:29.224) (total time: 3995ms): Feb 18 15:17:33 crc kubenswrapper[4739]: Trace[1608003166]: [3.995213663s] [3.995213663s] END Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.433077 4739 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-ccsmg container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.433165 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" podUID="3886312a-0449-43cc-b914-a4633b2c7e80" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.484859 4739 patch_prober.go:28] interesting pod/nmstate-webhook-866bcb46dc-wtz97 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.85:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.484930 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" podUID="ff0bf868-48fc-48a7-845d-3286c1dd16f0" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.85:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.624416 4739 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-grbnx container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.624497 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" podUID="f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b" 
containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.625001 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" podUID="8add2ed9-6416-4e9f-a3a1-f8a615962850" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.740624 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-nd7jd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.740655 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-nd7jd container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.740689 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" podUID="717b73b9-8190-41ce-8513-eb314a37cdfd" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.740731 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" podUID="717b73b9-8190-41ce-8513-eb314a37cdfd" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.751961 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-whgjq container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.751975 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-whgjq container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.752035 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" podUID="82d2d64c-4971-48ee-a75c-30adadf054de" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.752081 4739 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" podUID="82d2d64c-4971-48ee-a75c-30adadf054de" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.790756 4739 patch_prober.go:28] interesting pod/thanos-querier-6d644458fc-hpxhn container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:33 crc kubenswrapper[4739]: I0218 15:17:33.790832 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" podUID="cd8f90ea-5539-40b0-ba4b-8b4465eae2dd" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:34 crc kubenswrapper[4739]: I0218 15:17:34.434604 4739 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:34 crc kubenswrapper[4739]: I0218 15:17:34.434950 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="bfabc0be-78aa-4cf2-ae16-6d226b95be03" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:34 crc kubenswrapper[4739]: I0218 15:17:34.640590 4739 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:34 crc kubenswrapper[4739]: I0218 15:17:34.640912 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="8cadd086-3e21-4dfc-9577-356fdcfe83c1" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:34 crc kubenswrapper[4739]: I0218 15:17:34.657020 4739 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.62:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:34 crc kubenswrapper[4739]: I0218 15:17:34.657490 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="d13e1961-45de-4db2-a4cb-04c91c7b18ad" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.62:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:35 crc kubenswrapper[4739]: I0218 15:17:35.129474 4739 patch_prober.go:28] interesting 
pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:35 crc kubenswrapper[4739]: I0218 15:17:35.130213 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:35 crc kubenswrapper[4739]: I0218 15:17:35.521726 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" podUID="d5023d08-507d-422f-b218-72057e18ef93" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:35 crc kubenswrapper[4739]: I0218 15:17:35.795839 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="acc9bbc5-8705-410b-977b-ca9245834e36" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:35 crc kubenswrapper[4739]: I0218 15:17:35.795860 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="acc9bbc5-8705-410b-977b-ca9245834e36" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:35 crc kubenswrapper[4739]: I0218 15:17:35.810778 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="06c16940-f153-4d15-891d-b0b91e9bce5a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:35 crc kubenswrapper[4739]: I0218 15:17:35.810899 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="06c16940-f153-4d15-891d-b0b91e9bce5a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:35 crc kubenswrapper[4739]: I0218 15:17:35.820668 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 18 15:17:35 crc kubenswrapper[4739]: I0218 15:17:35.820751 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Feb 18 15:17:35 crc kubenswrapper[4739]: I0218 15:17:35.826161 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"fbee4474fb7d9fba9da96c073301f9e9551a71041a83e9f79d995e7346274e4f"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.017884 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" podUID="0183ebc4-768c-4e08-8f1c-059fff8ba4e3" containerName="webhook-server" probeResult="failure" output="Get 
\"http://10.217.0.90:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.018237 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" podUID="0183ebc4-768c-4e08-8f1c-059fff8ba4e3" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.90:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.624698 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" podUID="b1d0315e-6ccb-4c6a-a488-98454bb41358" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.625185 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.761693 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-69bbfbf88f-tr2nx" podUID="7bcf09d7-a0a6-4225-a222-1c05f51e5f7d" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.761809 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-69bbfbf88f-tr2nx" podUID="7bcf09d7-a0a6-4225-a222-1c05f51e5f7d" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.794729 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="869aa11b-eba7-4598-90dc-d50c642b9120" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.794763 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="acc9bbc5-8705-410b-977b-ca9245834e36" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.794789 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="869aa11b-eba7-4598-90dc-d50c642b9120" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.794850 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.794873 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.799175 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"9c6d0d55a895a14de60b05d9c4c4d871217aebf1c393380fdf7c5b746a8e5a74"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.815538 
4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="3e688eb1-895d-465e-b5d9-a7b7ba9f4650" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.253:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.815777 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="3e688eb1-895d-465e-b5d9-a7b7ba9f4650" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.253:8081/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.860686 4739 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-kjphg container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.863559 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" podUID="26e9543b-d10d-461c-8751-99e53b680e1c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.863602 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.860831 4739 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-kjphg container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.863949 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" podUID="26e9543b-d10d-461c-8751-99e53b680e1c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.864036 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.864811 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus-operator-admission-webhook" containerStatusID={"Type":"cri-o","ID":"426a0d24cd8b8e5f72676298bc58b2a8e065bf98107a8c456aff7e5de045c61c"} pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" containerMessage="Container prometheus-operator-admission-webhook failed liveness probe, will be restarted" Feb 18 15:17:36 crc kubenswrapper[4739]: I0218 15:17:36.864849 4739 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" podUID="26e9543b-d10d-461c-8751-99e53b680e1c" containerName="prometheus-operator-admission-webhook" containerID="cri-o://426a0d24cd8b8e5f72676298bc58b2a8e065bf98107a8c456aff7e5de045c61c" gracePeriod=30 Feb 18 15:17:37 crc kubenswrapper[4739]: I0218 15:17:37.305812 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" podUID="bf495248-0dde-4619-bce7-2cbbda1fd646" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:37 crc kubenswrapper[4739]: I0218 15:17:37.305936 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-w8l6z" podUID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:37 crc kubenswrapper[4739]: I0218 15:17:37.305976 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-w8l6z" Feb 18 15:17:37 crc kubenswrapper[4739]: I0218 15:17:37.308744 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-w8l6z" podUID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:37 crc kubenswrapper[4739]: I0218 15:17:37.308768 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-w8l6z" podUID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:37 crc kubenswrapper[4739]: I0218 15:17:37.308784 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" podUID="bf495248-0dde-4619-bce7-2cbbda1fd646" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:37 crc kubenswrapper[4739]: I0218 15:17:37.315484 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"4b1aee6726e01b4f3e809ead95869c18e7f0932b5c6c23caf9d58537654c4378"} pod="metallb-system/frr-k8s-w8l6z" containerMessage="Container frr failed liveness probe, will be restarted" Feb 18 15:17:37 crc kubenswrapper[4739]: I0218 15:17:37.315615 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-w8l6z" podUID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerName="frr" containerID="cri-o://4b1aee6726e01b4f3e809ead95869c18e7f0932b5c6c23caf9d58537654c4378" gracePeriod=2 Feb 18 15:17:37 crc kubenswrapper[4739]: I0218 15:17:37.483883 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" podUID="52927612-b074-4573-aa63-41cbb1d704bf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Feb 18 15:17:37 crc kubenswrapper[4739]: I0218 15:17:37.664051 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr" podUID="c9731232-5945-414d-bf7c-cd9207130675" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.39:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:37 crc kubenswrapper[4739]: I0218 15:17:37.664173 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr" podUID="c9731232-5945-414d-bf7c-cd9207130675" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.39:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:37 crc kubenswrapper[4739]: I0218 15:17:37.715141 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" podUID="b1d0315e-6ccb-4c6a-a488-98454bb41358" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:37 crc kubenswrapper[4739]: I0218 15:17:37.794108 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="869aa11b-eba7-4598-90dc-d50c642b9120" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.142257 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.142622 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.142340 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.142727 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.252635 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-8gqkq" podUID="65fdc711-6806-433f-9f62-a09e816c6acf" containerName="speaker" probeResult="failure" output="Get 
\"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.252660 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-8gqkq" podUID="65fdc711-6806-433f-9f62-a09e816c6acf" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.252792 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-8gqkq" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.649856 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-28vcn container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.650245 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.650317 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.651496 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="marketplace-operator" containerStatusID={"Type":"cri-o","ID":"714b0e311cf9c7f19440fbee07a029c180a9456bf6cca7b41a364e0fdd30c2ef"} pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" containerMessage="Container marketplace-operator failed liveness probe, will be restarted" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.651546 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" containerID="cri-o://714b0e311cf9c7f19440fbee07a029c180a9456bf6cca7b41a364e0fdd30c2ef" gracePeriod=30 Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.685079 4739 generic.go:334] "Generic (PLEG): container finished" podID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerID="4b1aee6726e01b4f3e809ead95869c18e7f0932b5c6c23caf9d58537654c4378" exitCode=143 Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.685173 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8l6z" event={"ID":"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781","Type":"ContainerDied","Data":"4b1aee6726e01b4f3e809ead95869c18e7f0932b5c6c23caf9d58537654c4378"} Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.688608 4739 generic.go:334] "Generic (PLEG): container finished" podID="2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b" containerID="17c3780ab8ac0d7b8c9a7b14ec263189c1e018fcb68ef427cecb539c67cd078b" exitCode=0 Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.688652 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b","Type":"ContainerDied","Data":"17c3780ab8ac0d7b8c9a7b14ec263189c1e018fcb68ef427cecb539c67cd078b"} Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.691658 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-28vcn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.691721 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.691808 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.740918 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-nd7jd container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.741014 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" podUID="717b73b9-8190-41ce-8513-eb314a37cdfd" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.741354 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-nd7jd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": context deadline exceeded" start-of-body= Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.741410 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" podUID="717b73b9-8190-41ce-8513-eb314a37cdfd" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": context deadline exceeded" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.751235 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-whgjq container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.751298 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" podUID="82d2d64c-4971-48ee-a75c-30adadf054de" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.751382 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-whgjq container/gateway namespace/openshift-logging: Readiness 
probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.751402 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" podUID="82d2d64c-4971-48ee-a75c-30adadf054de" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.790804 4739 patch_prober.go:28] interesting pod/thanos-querier-6d644458fc-hpxhn container/kube-rbac-proxy-web namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.74:9091/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.790902 4739 patch_prober.go:28] interesting pod/thanos-querier-6d644458fc-hpxhn container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.790958 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" podUID="cd8f90ea-5539-40b0-ba4b-8b4465eae2dd" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.790947 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" podUID="cd8f90ea-5539-40b0-ba4b-8b4465eae2dd" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:38 crc kubenswrapper[4739]: I0218 15:17:38.799216 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.294534 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-8gqkq" podUID="65fdc711-6806-433f-9f62-a09e816c6acf" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.414221 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-p4z7n" podUID="0cc54472-7fa4-457e-a332-420ce4a7da93" containerName="registry-server" probeResult="failure" output=< Feb 18 15:17:39 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:17:39 crc kubenswrapper[4739]: > Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.416429 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-fmqk2" podUID="f143bfcf-f351-4ede-ab73-311c97dcb20d" containerName="registry-server" probeResult="failure" output=< Feb 18 15:17:39 crc kubenswrapper[4739]: timeout: failed to connect 
service ":50051" within 1s Feb 18 15:17:39 crc kubenswrapper[4739]: > Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.416678 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-fmqk2" podUID="f143bfcf-f351-4ede-ab73-311c97dcb20d" containerName="registry-server" probeResult="failure" output=< Feb 18 15:17:39 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:17:39 crc kubenswrapper[4739]: > Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.509584 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-v6sbz" podUID="c0ff243b-1f5d-4ab1-af8c-38a98b870d27" containerName="registry-server" probeResult="failure" output=< Feb 18 15:17:39 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:17:39 crc kubenswrapper[4739]: > Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.518069 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-v6sbz" podUID="c0ff243b-1f5d-4ab1-af8c-38a98b870d27" containerName="registry-server" probeResult="failure" output=< Feb 18 15:17:39 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:17:39 crc kubenswrapper[4739]: > Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.619194 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-cnhvq" podUID="07815587-810f-4c17-a671-8c613b3755d6" containerName="registry-server" probeResult="failure" output=< Feb 18 15:17:39 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:17:39 crc kubenswrapper[4739]: > Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.623278 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-cnhvq" podUID="07815587-810f-4c17-a671-8c613b3755d6" containerName="registry-server" probeResult="failure" output=< Feb 18 15:17:39 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:17:39 crc kubenswrapper[4739]: > Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.718717 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-w8l6z" event={"ID":"8ee20c2c-abb7-44a8-a5f9-8cacfce6f781","Type":"ContainerStarted","Data":"c7a405ca20cfc4b7316f76c9d44bf6f7d68548abd23ace50bc9925377095b1b4"} Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.960395 4739 patch_prober.go:28] interesting pod/oauth-openshift-798cf5fb96-6gsw8 container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.960474 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" podUID="bcd76c5a-1d18-4986-9be4-399139f65c11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.960489 4739 patch_prober.go:28] interesting pod/oauth-openshift-798cf5fb96-6gsw8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure 
output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:39 crc kubenswrapper[4739]: I0218 15:17:39.960572 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" podUID="bcd76c5a-1d18-4986-9be4-399139f65c11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.107758 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-p4z7n" podUID="0cc54472-7fa4-457e-a332-420ce4a7da93" containerName="registry-server" probeResult="failure" output=< Feb 18 15:17:40 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:17:40 crc kubenswrapper[4739]: > Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.551611 4739 trace.go:236] Trace[91807219]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-2" (18-Feb-2026 15:17:34.927) (total time: 5619ms): Feb 18 15:17:40 crc kubenswrapper[4739]: Trace[91807219]: [5.61925257s] [5.61925257s] END Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.749711 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9" podUID="61bc4b17-baf6-435c-9280-b97fcede913c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.749739 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" podUID="c8f419fe-23b1-4a93-97fe-05071df32425" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.758968 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b","Type":"ContainerStarted","Data":"3b88ad6a451cb11031d26153d44ccaf6530ebcbeea5a0eee1ba554d1ea07e86c"} Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.762602 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" event={"ID":"26e9543b-d10d-461c-8751-99e53b680e1c","Type":"ContainerDied","Data":"426a0d24cd8b8e5f72676298bc58b2a8e065bf98107a8c456aff7e5de045c61c"} Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.767220 4739 generic.go:334] "Generic (PLEG): container finished" podID="26e9543b-d10d-461c-8751-99e53b680e1c" containerID="426a0d24cd8b8e5f72676298bc58b2a8e065bf98107a8c456aff7e5de045c61c" exitCode=0 Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.805036 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="22142e4b-3aae-4317-a2e5-2ad225fb7473" containerName="prometheus" probeResult="failure" output="command timed out" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.819569 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" 
podUID="06c16940-f153-4d15-891d-b0b91e9bce5a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.819728 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="06c16940-f153-4d15-891d-b0b91e9bce5a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.823066 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="22142e4b-3aae-4317-a2e5-2ad225fb7473" containerName="prometheus" probeResult="failure" output="command timed out" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.831716 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.831787 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.831873 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" podUID="d617f67f-2577-418f-a367-42c366c17980" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.831716 4739 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-f4xd7 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.831745 4739 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-f4xd7 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.831988 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" podUID="9c1d88a8-7aa9-413f-81cc-5a4852b2691b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.832111 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" 
podUID="9c1d88a8-7aa9-413f-81cc-5a4852b2691b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.832657 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.872716 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.872774 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" podUID="19470a60-c796-4a28-a0e2-65b50fa94ea6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.872797 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.872893 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.873179 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.873250 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.873285 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.873356 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.873276 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": 
Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.873555 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-fqdjl"
Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.876942 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"2e24119667eedf40b82477d0bd3173e3790841c18a675752032ca58080019729"} pod="openshift-console-operator/console-operator-58897d9998-fqdjl" containerMessage="Container console-operator failed liveness probe, will be restarted"
Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.877040 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" containerID="cri-o://2e24119667eedf40b82477d0bd3173e3790841c18a675752032ca58080019729" gracePeriod=30
Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.913827 4739 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.913901 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.914003 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.996721 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" podUID="877f7fe3-168f-4b05-a88e-a7a11bf45e36" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.997219 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" podUID="2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:40 crc kubenswrapper[4739]: I0218 15:17:40.997900 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j" podUID="60bad312-a989-43d1-87e6-6c6f10d1ae8f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.096285 4739 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-kmtx7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.096350 4739 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-kmtx7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.096389 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podUID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.096424 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podUID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.104082 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-w8l6z"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.137882 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" podUID="fb608395-17b5-4b92-a0be-b5abc08ac979" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.178706 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8vh65" podUID="92f1b9c3-1bdd-48ca-9a76-68ace2635cf1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.178769 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.178833 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.178859 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.178930 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.303789 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" podUID="3b114d0a-837c-4f0c-b02a-db694bdab362" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.303877 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-rtb8n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.303939 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rtb8n" podUID="c8e8ae74-3ef7-42df-99f2-1f67c11edf6d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.303889 4739 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-9zgsz container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.303994 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" podUID="fb09df70-be06-48b6-a41d-16fb110b7c55" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.304019 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.303908 4739 patch_prober.go:28] interesting pod/downloads-7954f5f757-rtb8n container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.317430 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-rtb8n" podUID="c8e8ae74-3ef7-42df-99f2-1f67c11edf6d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
pod="openshift-console/downloads-7954f5f757-rtb8n" podUID="c8e8ae74-3ef7-42df-99f2-1f67c11edf6d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.317532 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"f4b0d8e8e140fb6de11974026f9767ddfdf44ffbc0d5f61b072eb7c7dcd22916"} pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.317621 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" podUID="fb09df70-be06-48b6-a41d-16fb110b7c55" containerName="authentication-operator" containerID="cri-o://f4b0d8e8e140fb6de11974026f9767ddfdf44ffbc0d5f61b072eb7c7dcd22916" gracePeriod=30 Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.345061 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" podUID="40be8fff-51f0-467a-aca5-517e02eea23b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.345196 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.386772 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-prt26" podUID="209f2e6c-29e9-444b-b14a-10eadb782a59" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.469672 4739 patch_prober.go:28] interesting pod/loki-operator-controller-manager-7c7d667b45-kx8bw container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.48:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.469730 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" podUID="4091e4df-be25-4e94-bf12-7079a8ce9b5f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.469791 4739 patch_prober.go:28] interesting pod/loki-operator-controller-manager-7c7d667b45-kx8bw container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.469813 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" 
podUID="4091e4df-be25-4e94-bf12-7079a8ce9b5f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.535128 4739 patch_prober.go:28] interesting pod/console-b9f98d489-4zk5t container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.535438 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-b9f98d489-4zk5t" podUID="39496c01-fddc-4d5c-8c1a-32af402a87cd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.535675 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.704691 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" podUID="d34f7233-92b8-4803-ab81-0da45a4de925" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.704788 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" podUID="e19083b1-791a-4549-b64e-0bb0032abad2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.705050 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.745758 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.745831 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.745830 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-4lkbs" podUID="8336a5f7-2ff0-440a-88b0-a6ab51692965" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.745892 4739 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.745834 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.745989 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.746027 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.748074 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"d22e2a825118fd5fe2867dcdb8fdfcade6e169eb808d0666acc156a1903a123a"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" containerMessage="Container packageserver failed liveness probe, will be restarted" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.748143 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" containerID="cri-o://d22e2a825118fd5fe2867dcdb8fdfcade6e169eb808d0666acc156a1903a123a" gracePeriod=30 Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.849113 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"3a9511a2775b08e37ccce91ae91ba1e1e8cf796f076f0c19d9ce73a8baf793c5"} pod="openshift-ingress/router-default-5444994796-5cdhr" containerMessage="Container router failed liveness probe, will be restarted" Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.849201 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" containerID="cri-o://3a9511a2775b08e37ccce91ae91ba1e1e8cf796f076f0c19d9ce73a8baf793c5" gracePeriod=10 Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.874594 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.874606 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:41 crc 
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.874735 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.874671 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.919985 4739 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.920291 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:41 crc kubenswrapper[4739]: I0218 15:17:41.993800 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" podUID="538f0d59-9eea-4f76-a310-f7f724593a1e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.043813 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-s7fsm" podUID="ac911184-3930-4f7e-9d77-2cc9e7262ea6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.085351 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" podUID="caed7b7d-66db-4bd9-ba33-efc5f3951069" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.126736 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" podUID="6741b4b4-1817-4639-bdf6-b5be2729a1fa" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.168635 4739 patch_prober.go:28] interesting pod/metrics-server-f5c56b6cc-ft74f container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.168703 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" podUID="ac03ed3e-3bdc-48cd-bf95-119b31b15208" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.168777 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.169052 4739 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-w8l6z" podUID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.170244 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="metrics-server" containerStatusID={"Type":"cri-o","ID":"3d8147b125cb5878360a74eb88bb0e2f86a338193df75f8534e81151d855bde8"} pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" containerMessage="Container metrics-server failed liveness probe, will be restarted"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.170303 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" podUID="ac03ed3e-3bdc-48cd-bf95-119b31b15208" containerName="metrics-server" containerID="cri-o://3d8147b125cb5878360a74eb88bb0e2f86a338193df75f8534e81151d855bde8" gracePeriod=170
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.376744 4739 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mqkqw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.376816 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.376903 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.377021 4739 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mqkqw container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.377066 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.377136 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.378496 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="operator" containerStatusID={"Type":"cri-o","ID":"f4ddca9038d3bd4756dcc8087b9a9bb925c7b018b9bc46301518d2782cc7fee9"} pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" containerMessage="Container operator failed liveness probe, will be restarted"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.378559 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" containerID="cri-o://f4ddca9038d3bd4756dcc8087b9a9bb925c7b018b9bc46301518d2782cc7fee9" gracePeriod=30
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.417642 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" podUID="40be8fff-51f0-467a-aca5-517e02eea23b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.563850 4739 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-lpf5k container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.7:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.564076 4739 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-lpf5k container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.7:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.564122 4739 patch_prober.go:28] interesting pod/console-b9f98d489-4zk5t container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.564148 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" podUID="2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.7:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.564162 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-b9f98d489-4zk5t" podUID="39496c01-fddc-4d5c-8c1a-32af402a87cd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.563859 4739 patch_prober.go:28] interesting pod/monitoring-plugin-58bc79f98c-nzqw5 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.564221 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" podUID="34c89fd8-2d23-4587-a802-4c07ad76bcd7" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.564238 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.564289 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.564433 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" podUID="2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.7:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.622805 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr" podUID="c9731232-5945-414d-bf7c-cd9207130675" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.39:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.746882 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" podUID="e19083b1-791a-4549-b64e-0bb0032abad2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.746922 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.747868 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.875283 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" event={"ID":"26e9543b-d10d-461c-8751-99e53b680e1c","Type":"ContainerStarted","Data":"a0176d656322c79a20a90d02d4d53a024199d46465995e35aeee88d383e2c911"}
Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.875351 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg"
(probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.916698 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.917162 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.921560 4739 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-kjphg container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Feb 18 15:17:42 crc kubenswrapper[4739]: I0218 15:17:42.921631 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" podUID="26e9543b-d10d-461c-8751-99e53b680e1c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.049037 4739 patch_prober.go:28] interesting pod/route-controller-manager-77ddcd9567-p8jx5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.049046 4739 patch_prober.go:28] interesting pod/route-controller-manager-77ddcd9567-p8jx5 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.049539 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" podUID="8166ccce-dd66-40c5-aed1-8f560c573a6e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.049611 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.050113 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" podUID="8166ccce-dd66-40c5-aed1-8f560c573a6e" containerName="route-controller-manager" probeResult="failure" 
output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.051061 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"56a1307aaf68651b341dd9b1e7344cad7501683c6ef6d4563093ee7194ac943e"} pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" containerMessage="Container route-controller-manager failed liveness probe, will be restarted" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.051103 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" podUID="8166ccce-dd66-40c5-aed1-8f560c573a6e" containerName="route-controller-manager" containerID="cri-o://56a1307aaf68651b341dd9b1e7344cad7501683c6ef6d4563093ee7194ac943e" gracePeriod=30 Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.083522 4739 patch_prober.go:28] interesting pod/controller-manager-7b7465fb97-9dgmn container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.083579 4739 patch_prober.go:28] interesting pod/controller-manager-7b7465fb97-9dgmn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.083594 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" podUID="0480fc06-58bc-47d0-9446-8eb7ecad6509" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.083653 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.083645 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" podUID="0480fc06-58bc-47d0-9446-8eb7ecad6509" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.085210 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller-manager" containerStatusID={"Type":"cri-o","ID":"54d7a8890659b3c46b4640bcb52cc98af7b156c2ab3e4bf6fa198003af572ff7"} pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" containerMessage="Container controller-manager failed liveness probe, will be restarted" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.085254 4739 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" podUID="0480fc06-58bc-47d0-9446-8eb7ecad6509" containerName="controller-manager" containerID="cri-o://54d7a8890659b3c46b4640bcb52cc98af7b156c2ab3e4bf6fa198003af572ff7" gracePeriod=30 Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.151100 4739 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-68g9x container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.151161 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" podUID="d2537052-1467-4892-afe4-cafbbdfbd645" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.151292 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.420662 4739 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mqkqw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.420747 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.433569 4739 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-ccsmg container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.433653 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" podUID="3886312a-0449-43cc-b914-a4633b2c7e80" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.433746 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.482701 4739 patch_prober.go:28] interesting pod/nmstate-webhook-866bcb46dc-wtz97 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.85:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.483015 4739 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" podUID="ff0bf868-48fc-48a7-845d-3286c1dd16f0" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.85:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.483099 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.564746 4739 patch_prober.go:28] interesting pod/monitoring-plugin-58bc79f98c-nzqw5 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.564826 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" podUID="34c89fd8-2d23-4587-a802-4c07ad76bcd7" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.700708 4739 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-grbnx container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.701003 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" podUID="f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.701017 4739 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-lpf5k container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.7:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.701048 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k" podUID="2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.7:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.701162 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.701374 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" podUID="8add2ed9-6416-4e9f-a3a1-f8a615962850" containerName="manager" probeResult="failure" 
output="Get \"http://10.217.0.123:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.700708 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" podUID="8add2ed9-6416-4e9f-a3a1-f8a615962850" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.701506 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.745056 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-nd7jd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.745148 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" podUID="717b73b9-8190-41ce-8513-eb314a37cdfd" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.752632 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-whgjq container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.752709 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" podUID="82d2d64c-4971-48ee-a75c-30adadf054de" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.791434 4739 patch_prober.go:28] interesting pod/thanos-querier-6d644458fc-hpxhn container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.791543 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" podUID="cd8f90ea-5539-40b0-ba4b-8b4465eae2dd" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.795701 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-xwm5v" podUID="547a8c99-05a3-45bf-9e45-785d6cdb8fb5" containerName="nmstate-handler" probeResult="failure" output="command timed out" Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.951188 4739 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-kjphg 
container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Feb 18 15:17:43 crc kubenswrapper[4739]: I0218 15:17:43.952513 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" podUID="26e9543b-d10d-461c-8751-99e53b680e1c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.142600 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.142623 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.142668 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.142720 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.142676 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.143167 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.146563 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"9544046d49726b08bf59463c644ffe22c27473e133ce5760004a0699f322d56b"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.146624 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" 
containerName="openshift-config-operator" containerID="cri-o://9544046d49726b08bf59463c644ffe22c27473e133ce5760004a0699f322d56b" gracePeriod=30 Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.152187 4739 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-68g9x container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.152356 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" podUID="d2537052-1467-4892-afe4-cafbbdfbd645" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.435042 4739 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.435107 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="bfabc0be-78aa-4cf2-ae16-6d226b95be03" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.483634 4739 patch_prober.go:28] interesting pod/nmstate-webhook-866bcb46dc-wtz97 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.85:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.483745 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97" podUID="ff0bf868-48fc-48a7-845d-3286c1dd16f0" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.85:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.636584 4739 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.636644 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="8cadd086-3e21-4dfc-9577-356fdcfe83c1" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.657145 4739 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get 
\"https://10.217.0.62:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.657232 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="d13e1961-45de-4db2-a4cb-04c91c7b18ad" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.62:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.702733 4739 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-grbnx container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.702814 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx" podUID="f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.743638 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52" podUID="8add2ed9-6416-4e9f-a3a1-f8a615962850" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.951628 4739 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-ccsmg container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:44 crc kubenswrapper[4739]: I0218 15:17:44.953259 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" podUID="3886312a-0449-43cc-b914-a4633b2c7e80" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.128536 4739 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.128601 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.522653 4739 prober.go:107] "Probe failed" probeType="Readiness" 
pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" podUID="d5023d08-507d-422f-b218-72057e18ef93" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.796721 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="22142e4b-3aae-4317-a2e5-2ad225fb7473" containerName="prometheus" probeResult="failure" output="command timed out" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.796765 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" podUID="7e037260-564c-4a0e-bfd4-f5452ccd7e5b" containerName="sbdb" probeResult="failure" output="command timed out" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.796721 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="22142e4b-3aae-4317-a2e5-2ad225fb7473" containerName="prometheus" probeResult="failure" output="command timed out" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.796822 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-njz85" podUID="7e037260-564c-4a0e-bfd4-f5452ccd7e5b" containerName="nbdb" probeResult="failure" output="command timed out" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.797078 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="acc9bbc5-8705-410b-977b-ca9245834e36" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.810310 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="06c16940-f153-4d15-891d-b0b91e9bce5a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.810487 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.810689 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="06c16940-f153-4d15-891d-b0b91e9bce5a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.860594 4739 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-kjphg container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.860647 4739 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-kjphg container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 
15:17:45.860674 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" podUID="26e9543b-d10d-461c-8751-99e53b680e1c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.860691 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" podUID="26e9543b-d10d-461c-8751-99e53b680e1c" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.987703 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" event={"ID":"0348c042-11c0-4a27-a8d4-04beea8e11a3","Type":"ContainerDied","Data":"f4ddca9038d3bd4756dcc8087b9a9bb925c7b018b9bc46301518d2782cc7fee9"} Feb 18 15:17:45 crc kubenswrapper[4739]: I0218 15:17:45.989247 4739 generic.go:334] "Generic (PLEG): container finished" podID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerID="f4ddca9038d3bd4756dcc8087b9a9bb925c7b018b9bc46301518d2782cc7fee9" exitCode=0 Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.007265 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-fqdjl_07036c39-40f5-4969-afd0-1003c1eae037/console-operator/0.log" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.007379 4739 generic.go:334] "Generic (PLEG): container finished" podID="07036c39-40f5-4969-afd0-1003c1eae037" containerID="2e24119667eedf40b82477d0bd3173e3790841c18a675752032ca58080019729" exitCode=1 Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.007427 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" event={"ID":"07036c39-40f5-4969-afd0-1003c1eae037","Type":"ContainerDied","Data":"2e24119667eedf40b82477d0bd3173e3790841c18a675752032ca58080019729"} Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.017674 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" podUID="0183ebc4-768c-4e08-8f1c-059fff8ba4e3" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.90:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.017766 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.018306 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" podUID="0183ebc4-768c-4e08-8f1c-059fff8ba4e3" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.90:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.018463 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.023810 4739 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="webhook-server" containerStatusID={"Type":"cri-o","ID":"51d685075d5784c3ee8f2b4aece9414104ea75b1f0e897b19ab1e41648c0b843"} pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" containerMessage="Container webhook-server failed liveness probe, will be restarted" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.024348 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" podUID="0183ebc4-768c-4e08-8f1c-059fff8ba4e3" containerName="webhook-server" containerID="cri-o://51d685075d5784c3ee8f2b4aece9414104ea75b1f0e897b19ab1e41648c0b843" gracePeriod=2 Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.667687 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" podUID="b1d0315e-6ccb-4c6a-a488-98454bb41358" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.667733 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" podUID="b1d0315e-6ccb-4c6a-a488-98454bb41358" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.760806 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-69bbfbf88f-tr2nx" podUID="7bcf09d7-a0a6-4225-a222-1c05f51e5f7d" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.760952 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.761605 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-69bbfbf88f-tr2nx" podUID="7bcf09d7-a0a6-4225-a222-1c05f51e5f7d" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.761696 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.762797 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"de2ce2c2e7e8920c945292e32d288535f4d829f8fe7efd2af53224c6a19bfdd9"} pod="metallb-system/controller-69bbfbf88f-tr2nx" containerMessage="Container controller failed liveness probe, will be restarted" Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.762893 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/controller-69bbfbf88f-tr2nx" podUID="7bcf09d7-a0a6-4225-a222-1c05f51e5f7d" containerName="controller" containerID="cri-o://de2ce2c2e7e8920c945292e32d288535f4d829f8fe7efd2af53224c6a19bfdd9" gracePeriod=2 Feb 18 15:17:46 crc kubenswrapper[4739]: I0218 15:17:46.816389 4739 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="3e688eb1-895d-465e-b5d9-a7b7ba9f4650" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.253:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.206677 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-w8l6z" podUID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.329607 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-5864f6ff6b-7n5hc" podUID="8bf4ed0a-8055-462b-9324-1fa1c4f429b1" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.329825 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" podUID="bf495248-0dde-4619-bce7-2cbbda1fd646" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.371620 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-w8l6z" podUID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.371669 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.371749 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.371782 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-69bbfbf88f-tr2nx" podUID="7bcf09d7-a0a6-4225-a222-1c05f51e5f7d" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": EOF" Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.371827 4739 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-w8l6z" podUID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.371873 4739 prober.go:107] "Probe failed" probeType="Liveness" 
pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q8h4v" podUID="bf495248-0dde-4619-bce7-2cbbda1fd646" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.437526 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": context deadline exceeded" start-of-body= Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.437577 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": context deadline exceeded" Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.437799 4739 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.437870 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.524693 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" podUID="52927612-b074-4573-aa63-41cbb1d704bf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.524640 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" podUID="52927612-b074-4573-aa63-41cbb1d704bf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:47 crc kubenswrapper[4739]: I0218 15:17:47.794888 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="869aa11b-eba7-4598-90dc-d50c642b9120" containerName="galera" probeResult="failure" output="command timed out" Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.046260 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-fqdjl_07036c39-40f5-4969-afd0-1003c1eae037/console-operator/0.log" Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.046656 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" event={"ID":"07036c39-40f5-4969-afd0-1003c1eae037","Type":"ContainerStarted","Data":"448c4bf1c14bd59255ae71526fdd326b53d90eea5f151e24381bbae63e4aa0c2"} Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.046909 4739 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.047287 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.047333 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.049874 4739 generic.go:334] "Generic (PLEG): container finished" podID="0183ebc4-768c-4e08-8f1c-059fff8ba4e3" containerID="51d685075d5784c3ee8f2b4aece9414104ea75b1f0e897b19ab1e41648c0b843" exitCode=0 Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.049948 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" event={"ID":"0183ebc4-768c-4e08-8f1c-059fff8ba4e3","Type":"ContainerDied","Data":"51d685075d5784c3ee8f2b4aece9414104ea75b1f0e897b19ab1e41648c0b843"} Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.053408 4739 generic.go:334] "Generic (PLEG): container finished" podID="7bcf09d7-a0a6-4225-a222-1c05f51e5f7d" containerID="de2ce2c2e7e8920c945292e32d288535f4d829f8fe7efd2af53224c6a19bfdd9" exitCode=0 Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.053509 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-tr2nx" event={"ID":"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d","Type":"ContainerDied","Data":"de2ce2c2e7e8920c945292e32d288535f4d829f8fe7efd2af53224c6a19bfdd9"} Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.055703 4739 generic.go:334] "Generic (PLEG): container finished" podID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerID="d22e2a825118fd5fe2867dcdb8fdfcade6e169eb808d0666acc156a1903a123a" exitCode=0 Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.055742 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" event={"ID":"d27c3dde-4f78-49ec-8cc2-39c588d91f56","Type":"ContainerDied","Data":"d22e2a825118fd5fe2867dcdb8fdfcade6e169eb808d0666acc156a1903a123a"} Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.252785 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-8gqkq" podUID="65fdc711-6806-433f-9f62-a09e816c6acf" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.252795 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-8gqkq" podUID="65fdc711-6806-433f-9f62-a09e816c6acf" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.252894 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-8gqkq" Feb 18 15:17:48 crc 
kubenswrapper[4739]: I0218 15:17:48.254012 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"e0f5239ecd0d03308f1e80f91a9ed7eb0f584e8c0d82253a4f43fe0ea69f33e0"} pod="metallb-system/speaker-8gqkq" containerMessage="Container speaker failed liveness probe, will be restarted" Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.254108 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-8gqkq" podUID="65fdc711-6806-433f-9f62-a09e816c6acf" containerName="speaker" containerID="cri-o://e0f5239ecd0d03308f1e80f91a9ed7eb0f584e8c0d82253a4f43fe0ea69f33e0" gracePeriod=2 Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.607426 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.745749 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-nd7jd container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.745820 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" podUID="717b73b9-8190-41ce-8513-eb314a37cdfd" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.745821 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-nd7jd container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.745891 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-nd7jd" podUID="717b73b9-8190-41ce-8513-eb314a37cdfd" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.752301 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-whgjq container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.752363 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" podUID="82d2d64c-4971-48ee-a75c-30adadf054de" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.752301 4739 patch_prober.go:28] interesting pod/logging-loki-gateway-5f9bf547f9-whgjq container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.752825 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5f9bf547f9-whgjq" podUID="82d2d64c-4971-48ee-a75c-30adadf054de" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.792631 4739 patch_prober.go:28] interesting pod/thanos-querier-6d644458fc-hpxhn container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.792702 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-6d644458fc-hpxhn" podUID="cd8f90ea-5539-40b0-ba4b-8b4465eae2dd" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.74:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:48 crc kubenswrapper[4739]: I0218 15:17:48.822209 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="06c16940-f153-4d15-891d-b0b91e9bce5a" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.165:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.086015 4739 patch_prober.go:28] interesting pod/apiserver-76f77b778f-n78q8 container/openshift-apiserver namespace/openshift-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.5:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.086050 4739 patch_prober.go:28] interesting pod/apiserver-76f77b778f-n78q8 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.086092 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-apiserver/apiserver-76f77b778f-n78q8" podUID="86f15b94-810d-4448-a663-fd8862f0e601" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.086117 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-n78q8" podUID="86f15b94-810d-4448-a663-fd8862f0e601" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.116252 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" event={"ID":"d27c3dde-4f78-49ec-8cc2-39c588d91f56","Type":"ContainerStarted","Data":"fcb7a2a732a4e62cfe6cc4e0b6ca5e900a768e98885a703266d0a7cb837318fb"}
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.117245 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m"
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.117511 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" start-of-body=
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.117744 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused"
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.139468 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" event={"ID":"0348c042-11c0-4a27-a8d4-04beea8e11a3","Type":"ContainerStarted","Data":"027fa5b895c9d2041710b3aecf7247baa77cc62da23c5b99c574829a89498229"}
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.140176 4739 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mqkqw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.27:8081/healthz\": dial tcp 10.217.0.27:8081: connect: connection refused" start-of-body=
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.140233 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/healthz\": dial tcp 10.217.0.27:8081: connect: connection refused"
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.142261 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body=
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.142306 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused"
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.153699 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" event={"ID":"0183ebc4-768c-4e08-8f1c-059fff8ba4e3","Type":"ContainerStarted","Data":"c897f89bd17bf83567088a7d419fd6c874771118a180dbd13b1b768c5af07ce3"}
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.155352 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g"
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.168923 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-tr2nx" event={"ID":"7bcf09d7-a0a6-4225-a222-1c05f51e5f7d","Type":"ContainerStarted","Data":"e79657c8c634b205d921089dfeb80a880b25482cb3abfbc711a1d89f86580bf9"}
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.168975 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-tr2nx"
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.172751 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.172819 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.208510 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Feb 18 15:17:49 crc kubenswrapper[4739]: E0218 15:17:49.257646 4739 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T15:17:39Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T15:17:39Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T15:17:39Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T15:17:39Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.294949 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-8gqkq" podUID="65fdc711-6806-433f-9f62-a09e816c6acf" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.806408 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-cnhvq" podUID="07815587-810f-4c17-a671-8c613b3755d6" containerName="registry-server" probeResult="failure" output="command timed out"
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.807044 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-index-cnhvq"
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.808073 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-cnhvq" podUID="07815587-810f-4c17-a671-8c613b3755d6" containerName="registry-server" probeResult="failure" output="command timed out"
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.808197 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-cnhvq"
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.820726 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"f07097d931a10c25326e8aae468135c1bed2cc69762228b9f767f8fec46b12ea"} pod="openstack-operators/openstack-operator-index-cnhvq" containerMessage="Container registry-server failed liveness probe, will be restarted"
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.820794 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-cnhvq" podUID="07815587-810f-4c17-a671-8c613b3755d6" containerName="registry-server" containerID="cri-o://f07097d931a10c25326e8aae468135c1bed2cc69762228b9f767f8fec46b12ea" gracePeriod=30
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.825131 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.825169 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.825216 4739 patch_prober.go:28] interesting pod/console-operator-58897d9998-fqdjl container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.825262 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" podUID="07036c39-40f5-4969-afd0-1003c1eae037" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.926803 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.926877 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.960866 4739 patch_prober.go:28] interesting pod/oauth-openshift-798cf5fb96-6gsw8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.960935 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" podUID="bcd76c5a-1d18-4986-9be4-399139f65c11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.961020 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8"
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.961093 4739 patch_prober.go:28] interesting pod/oauth-openshift-798cf5fb96-6gsw8 container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.961161 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" podUID="bcd76c5a-1d18-4986-9be4-399139f65c11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.961232 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8"
Feb 18 15:17:49 crc kubenswrapper[4739]: I0218 15:17:49.962388 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-openshift" containerStatusID={"Type":"cri-o","ID":"873aca0bbc81a7124b75ae87a2863a7a8a119c825b1bc26fde747334cd6eb3e4"} pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" containerMessage="Container oauth-openshift failed liveness probe, will be restarted"
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.181251 4739 generic.go:334] "Generic (PLEG): container finished" podID="65fdc711-6806-433f-9f62-a09e816c6acf" containerID="e0f5239ecd0d03308f1e80f91a9ed7eb0f584e8c0d82253a4f43fe0ea69f33e0" exitCode=0
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.181587 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8gqkq" event={"ID":"65fdc711-6806-433f-9f62-a09e816c6acf","Type":"ContainerDied","Data":"e0f5239ecd0d03308f1e80f91a9ed7eb0f584e8c0d82253a4f43fe0ea69f33e0"}
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.182939 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw"
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.183035 4739 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mqkqw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.27:8081/healthz\": dial tcp 10.217.0.27:8081: connect: connection refused" start-of-body=
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.183079 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/healthz\": dial tcp 10.217.0.27:8081: connect: connection refused"
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.183328 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" start-of-body=
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.183386 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused"
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.324710 4739 trace.go:236] Trace[949522613]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-index-gateway-0" (18-Feb-2026 15:17:46.654) (total time: 3666ms):
Feb 18 15:17:50 crc kubenswrapper[4739]: Trace[949522613]: [3.666445139s] [3.666445139s] END
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.324711 4739 trace.go:236] Trace[1410388390]: "Calculate volume metrics of storage for pod minio-dev/minio" (18-Feb-2026 15:17:47.993) (total time: 2326ms):
Feb 18 15:17:50 crc kubenswrapper[4739]: Trace[1410388390]: [2.326921222s] [2.326921222s] END
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.324789 4739 trace.go:236] Trace[1685105932]: "Calculate volume metrics of ovndbcluster-sb-etc-ovn for pod openstack/ovsdbserver-sb-0" (18-Feb-2026 15:17:44.729) (total time: 5591ms):
Feb 18 15:17:50 crc kubenswrapper[4739]: Trace[1685105932]: [5.591545492s] [5.591545492s] END
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.622851 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" podUID="d34f7233-92b8-4803-ab81-0da45a4de925" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": dial tcp 10.217.0.115:8081: connect: connection refused"
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.623048 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" podUID="d34f7233-92b8-4803-ab81-0da45a4de925" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": dial tcp 10.217.0.115:8081: connect: connection refused"
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.623173 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc"
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.623624 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" podUID="d34f7233-92b8-4803-ab81-0da45a4de925" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": dial tcp 10.217.0.115:8081: connect: connection refused"
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.793211 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="22142e4b-3aae-4317-a2e5-2ad225fb7473" containerName="prometheus" probeResult="failure" output="command timed out"
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.794548 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-p4z7n" podUID="0cc54472-7fa4-457e-a332-420ce4a7da93" containerName="registry-server" probeResult="failure" output="command timed out"
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.795818 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="22142e4b-3aae-4317-a2e5-2ad225fb7473" containerName="prometheus" probeResult="failure" output="command timed out"
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.796014 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.798395 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-p4z7n" podUID="0cc54472-7fa4-457e-a332-420ce4a7da93" containerName="registry-server" probeResult="failure" output="command timed out"
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.874702 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9" podUID="61bc4b17-baf6-435c-9280-b97fcede913c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.874713 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" podUID="c8f419fe-23b1-4a93-97fe-05071df32425" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.916671 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.916695 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-knpz9" podUID="61bc4b17-baf6-435c-9280-b97fcede913c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.916746 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.916829 4739 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-f4xd7 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.916853 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" podUID="9c1d88a8-7aa9-413f-81cc-5a4852b2691b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.916892 4739 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-f4xd7 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.916910 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-f4xd7" podUID="9c1d88a8-7aa9-413f-81cc-5a4852b2691b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.916947 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" podUID="d617f67f-2577-418f-a367-42c366c17980" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.917024 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds"
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.999590 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds" podUID="d617f67f-2577-418f-a367-42c366c17980" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:50 crc kubenswrapper[4739]: I0218 15:17:50.999666 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" podUID="19470a60-c796-4a28-a0e2-65b50fa94ea6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:50.999723 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.018850 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445" podUID="c8f419fe-23b1-4a93-97fe-05071df32425" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.018921 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" podUID="877f7fe3-168f-4b05-a88e-a7a11bf45e36" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.018998 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.019158 4739 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.019190 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.019233 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": EOF" start-of-body=
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.019252 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": EOF"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.102584 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" start-of-body=
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.102638 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.102706 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j" podUID="60bad312-a989-43d1-87e6-6c6f10d1ae8f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.102839 4739 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-k8g5m container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused" start-of-body=
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.102859 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" podUID="d27c3dde-4f78-49ec-8cc2-39c588d91f56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.22:5443/healthz\": dial tcp 10.217.0.22:5443: connect: connection refused"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.185437 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh" podUID="19470a60-c796-4a28-a0e2-65b50fa94ea6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.192425 4739 generic.go:334] "Generic (PLEG): container finished" podID="d34f7233-92b8-4803-ab81-0da45a4de925" containerID="056e9102a7f1a0d4fcedd4064bb1d26c99b0d9df59bf742820c56be6d652517b" exitCode=1
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.192549 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" event={"ID":"d34f7233-92b8-4803-ab81-0da45a4de925","Type":"ContainerDied","Data":"056e9102a7f1a0d4fcedd4064bb1d26c99b0d9df59bf742820c56be6d652517b"}
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.196462 4739 scope.go:117] "RemoveContainer" containerID="056e9102a7f1a0d4fcedd4064bb1d26c99b0d9df59bf742820c56be6d652517b"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.226857 4739 patch_prober.go:28] interesting pod/oauth-openshift-798cf5fb96-6gsw8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.226942 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" podUID="bcd76c5a-1d18-4986-9be4-399139f65c11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.227430 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-xhkdh" podUID="877f7fe3-168f-4b05-a88e-a7a11bf45e36" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.227815 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.227981 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-m469j" podUID="60bad312-a989-43d1-87e6-6c6f10d1ae8f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.228226 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hxdbh"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.269776 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" podUID="fb608395-17b5-4b92-a0be-b5abc08ac979" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.269819 4739 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-kmtx7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.269885 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podUID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.269910 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.269946 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.269949 4739 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-kmtx7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.270006 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podUID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.270088 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.270872 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="catalog-operator" containerStatusID={"Type":"cri-o","ID":"71cd9ce0ab26ac5d77f5f24bda6ba500e6e908373465984fe7265b695d172478"} pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" containerMessage="Container catalog-operator failed liveness probe, will be restarted"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.270916 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podUID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerName="catalog-operator" containerID="cri-o://71cd9ce0ab26ac5d77f5f24bda6ba500e6e908373465984fe7265b695d172478" gracePeriod=30
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.311990 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" podUID="fb608395-17b5-4b92-a0be-b5abc08ac979" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.312412 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-q4vb2" podUID="2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.394647 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" podUID="3b114d0a-837c-4f0c-b02a-db694bdab362" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.395016 4739 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mqkqw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.27:8081/healthz\": dial tcp 10.217.0.27:8081: connect: connection refused" start-of-body=
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.395075 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/healthz\": dial tcp 10.217.0.27:8081: connect: connection refused"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.395202 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rk7x9" podUID="40be8fff-51f0-467a-aca5-517e02eea23b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.436788 4739 patch_prober.go:28] interesting pod/loki-operator-controller-manager-7c7d667b45-kx8bw container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.436898 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" podUID="4091e4df-be25-4e94-bf12-7079a8ce9b5f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.436993 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-cdt9l" podUID="3b114d0a-837c-4f0c-b02a-db694bdab362" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.436994 4739 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mqkqw container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.27:8081/healthz\": dial tcp 10.217.0.27:8081: connect: connection refused" start-of-body=
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.437078 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/healthz\": dial tcp 10.217.0.27:8081: connect: connection refused"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.437203 4739 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-mqkqw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.27:8081/healthz\": dial tcp 10.217.0.27:8081: connect: connection refused" start-of-body=
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.437261 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" podUID="0348c042-11c0-4a27-a8d4-04beea8e11a3" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.27:8081/healthz\": dial tcp 10.217.0.27:8081: connect: connection refused"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.437932 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-47445"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.521651 4739 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-qfljx container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.521731 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" podUID="34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.521920 4739 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-qfljx container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.521942 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qfljx" podUID="34b1ff51-e9c9-4c9e-a83d-bae8f7cf98ac" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.522076 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.522597 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.535143 4739 patch_prober.go:28] interesting pod/console-b9f98d489-4zk5t container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.535194 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-b9f98d489-4zk5t" podUID="39496c01-fddc-4d5c-8c1a-32af402a87cd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.704688 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" podUID="e19083b1-791a-4549-b64e-0bb0032abad2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.704739 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" podUID="e19083b1-791a-4549-b64e-0bb0032abad2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.711246 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-lpf5k"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.793486 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-fmqk2" podUID="f143bfcf-f351-4ede-ab73-311c97dcb20d" containerName="registry-server" probeResult="failure" output="command timed out"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.793486 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-fmqk2" podUID="f143bfcf-f351-4ede-ab73-311c97dcb20d" containerName="registry-server" probeResult="failure" output="command timed out"
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.820099 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-v6sbz" podUID="c0ff243b-1f5d-4ab1-af8c-38a98b870d27" containerName="registry-server" probeResult="failure" output=<
Feb 18 15:17:51 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s
Feb 18 15:17:51 crc kubenswrapper[4739]: >
Feb 18 15:17:51 crc kubenswrapper[4739]: I0218 15:17:51.913691 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-b9hds"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.033700 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" podUID="538f0d59-9eea-4f76-a310-f7f724593a1e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.034188 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7" podUID="538f0d59-9eea-4f76-a310-f7f724593a1e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.034274 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.121286 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" podUID="caed7b7d-66db-4bd9-ba33-efc5f3951069" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.207094 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-kssdd" podUID="caed7b7d-66db-4bd9-ba33-efc5f3951069" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.207109 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" podUID="6741b4b4-1817-4639-bdf6-b5be2729a1fa" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.233721 4739 generic.go:334] "Generic (PLEG): container finished" podID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerID="71cd9ce0ab26ac5d77f5f24bda6ba500e6e908373465984fe7265b695d172478" exitCode=0
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.233788 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" event={"ID":"db4aad67-0ef8-474a-9e92-143738aed5b6","Type":"ContainerDied","Data":"71cd9ce0ab26ac5d77f5f24bda6ba500e6e908373465984fe7265b695d172478"}
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.240365 4739 generic.go:334] "Generic (PLEG): container finished" podID="52927612-b074-4573-aa63-41cbb1d704bf" containerID="d3e8ca41d583375bdc3898cd694974bbd81d5102bd70a0f141e5a482d3d4a18a" exitCode=1
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.240408 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" event={"ID":"52927612-b074-4573-aa63-41cbb1d704bf","Type":"ContainerDied","Data":"d3e8ca41d583375bdc3898cd694974bbd81d5102bd70a0f141e5a482d3d4a18a"}
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.241208 4739 scope.go:117] "RemoveContainer" containerID="d3e8ca41d583375bdc3898cd694974bbd81d5102bd70a0f141e5a482d3d4a18a"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.245799 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" event={"ID":"d34f7233-92b8-4803-ab81-0da45a4de925","Type":"ContainerStarted","Data":"61f0f91a573ef08cafaed2521fe7636c043699e262f99e85e72b59d65a49984d"}
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.246321 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.250523 4739 patch_prober.go:28] interesting pod/metrics-server-f5c56b6cc-ft74f container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.250824 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" podUID="ac03ed3e-3bdc-48cd-bf95-119b31b15208" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.76:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.250532 4739 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-w8l6z" podUID="8ee20c2c-abb7-44a8-a5f9-8cacfce6f781" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.250683 4739 patch_prober.go:28] interesting pod/route-controller-manager-77ddcd9567-p8jx5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body=
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.250865 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" podUID="8166ccce-dd66-40c5-aed1-8f560c573a6e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.251119 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body=
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.251142 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.251177 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" podUID="6741b4b4-1817-4639-bdf6-b5be2729a1fa" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.251226 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.251354 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.259821 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.259869 4739 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="610a047b229be1341e5743f79181f9b3692358957501791b9cc4b591a8f75fdd" exitCode=1
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.259944 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"610a047b229be1341e5743f79181f9b3692358957501791b9cc4b591a8f75fdd"}
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.259970 4739 scope.go:117] "RemoveContainer" containerID="158b2bbe96d182b95ae80a5d9815cb703773b2c176be6c9f1ae7ad4114f0f366"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.260637 4739 scope.go:117] "RemoveContainer" containerID="610a047b229be1341e5743f79181f9b3692358957501791b9cc4b591a8f75fdd"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.273654 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-5cdhr_b6cef9b9-56ee-4d0a-8c13-651e3f649a0e/router/0.log"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.273726 4739 generic.go:334] "Generic (PLEG): container finished" podID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerID="3a9511a2775b08e37ccce91ae91ba1e1e8cf796f076f0c19d9ce73a8baf793c5" exitCode=137
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.274019 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5cdhr" event={"ID":"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e","Type":"ContainerDied","Data":"3a9511a2775b08e37ccce91ae91ba1e1e8cf796f076f0c19d9ce73a8baf793c5"}
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.281060 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8gqkq" event={"ID":"65fdc711-6806-433f-9f62-a09e816c6acf","Type":"ContainerStarted","Data":"440e6130bf39495a0a02b8d3fb998fc9dd6f7539606395b70c0d7a272ff71405"}
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.281328 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-8gqkq"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.286866 4739 generic.go:334] "Generic (PLEG): container finished" podID="fb608395-17b5-4b92-a0be-b5abc08ac979" containerID="a085a0d30a2debdcfa4545d3ddb90ae303e71e3d6d75309c439d719f629caed7" exitCode=1
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.286908 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" event={"ID":"fb608395-17b5-4b92-a0be-b5abc08ac979","Type":"ContainerDied","Data":"a085a0d30a2debdcfa4545d3ddb90ae303e71e3d6d75309c439d719f629caed7"}
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.287697 4739 scope.go:117] "RemoveContainer" containerID="a085a0d30a2debdcfa4545d3ddb90ae303e71e3d6d75309c439d719f629caed7"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.316016 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6956d67c5c-52bt7"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.406554 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.455277 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.505900 4739 patch_prober.go:28] interesting pod/monitoring-plugin-58bc79f98c-nzqw5 container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.506431 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" podUID="34c89fd8-2d23-4587-a802-4c07ad76bcd7" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.77:9443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.580588 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.621266 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-927qr" podUID="c9731232-5945-414d-bf7c-cd9207130675" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.39:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.705763 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-wtz97"
Feb 18 15:17:52 crc kubenswrapper[4739]: I0218 15:17:52.806573 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7954588dd9-trg52"
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.150503 4739 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-68g9x container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.150865 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" podUID="d2537052-1467-4892-afe4-cafbbdfbd645" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.309607 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" event={"ID":"db4aad67-0ef8-474a-9e92-143738aed5b6","Type":"ContainerStarted","Data":"03b57e74e2832a74da57d2fde6055e5aeb34fdccec0ed1be93f8003848aff1f5"}
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.311183 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7"
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.311286 4739 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-kmtx7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.311330 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podUID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused"
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.323429 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" event={"ID":"52927612-b074-4573-aa63-41cbb1d704bf","Type":"ContainerStarted","Data":"488b2ecf524bc1de7290bcb09e9216f76ae0e392d0c99030d98f7041ceab1a52"}
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.324600 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl"
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.338622 4739 generic.go:334] "Generic (PLEG): container finished" podID="07815587-810f-4c17-a671-8c613b3755d6" containerID="f07097d931a10c25326e8aae468135c1bed2cc69762228b9f767f8fec46b12ea" exitCode=0
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.338703 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cnhvq" event={"ID":"07815587-810f-4c17-a671-8c613b3755d6","Type":"ContainerDied","Data":"f07097d931a10c25326e8aae468135c1bed2cc69762228b9f767f8fec46b12ea"}
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.349089 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-5cdhr_b6cef9b9-56ee-4d0a-8c13-651e3f649a0e/router/0.log"
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.349402 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5cdhr" event={"ID":"b6cef9b9-56ee-4d0a-8c13-651e3f649a0e","Type":"ContainerStarted","Data":"2f0befe19ae7e085bfe950f663a17fd08137434c2f62964664fd6ccfa5efae50"}
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.358265 4739 generic.go:334] "Generic (PLEG): container finished" podID="4091e4df-be25-4e94-bf12-7079a8ce9b5f" containerID="668e5cf344ed8d06e64315007bd574671cf8c8e1f1fd333153fe7325adbbecad" exitCode=1
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.358357 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" event={"ID":"4091e4df-be25-4e94-bf12-7079a8ce9b5f","Type":"ContainerDied","Data":"668e5cf344ed8d06e64315007bd574671cf8c8e1f1fd333153fe7325adbbecad"}
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.361106 4739 scope.go:117] "RemoveContainer" containerID="668e5cf344ed8d06e64315007bd574671cf8c8e1f1fd333153fe7325adbbecad"
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.368161 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" event={"ID":"fb608395-17b5-4b92-a0be-b5abc08ac979","Type":"ContainerStarted","Data":"c0bd26b1eb604066c51f89590061ee9e97354fd35980d19ee79cbb3136a5cdf9"}
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.368294 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2"
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.373920 4739 generic.go:334] "Generic (PLEG): container finished" podID="d5023d08-507d-422f-b218-72057e18ef93" containerID="f464ee1c513741325a02b0bed74b4d6dad23cf297d2147cca8e5c0c204eafec2" exitCode=1
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.373982 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" event={"ID":"d5023d08-507d-422f-b218-72057e18ef93","Type":"ContainerDied","Data":"f464ee1c513741325a02b0bed74b4d6dad23cf297d2147cca8e5c0c204eafec2"}
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.375088 4739 scope.go:117] "RemoveContainer" containerID="f464ee1c513741325a02b0bed74b4d6dad23cf297d2147cca8e5c0c204eafec2"
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.381513 4739 generic.go:334] "Generic (PLEG): container finished" podID="0480fc06-58bc-47d0-9446-8eb7ecad6509" containerID="54d7a8890659b3c46b4640bcb52cc98af7b156c2ab3e4bf6fa198003af572ff7" exitCode=0
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.381568 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" event={"ID":"0480fc06-58bc-47d0-9446-8eb7ecad6509","Type":"ContainerDied","Data":"54d7a8890659b3c46b4640bcb52cc98af7b156c2ab3e4bf6fa198003af572ff7"}
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.388168 4739 generic.go:334] "Generic (PLEG): container finished" podID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerID="9544046d49726b08bf59463c644ffe22c27473e133ce5760004a0699f322d56b" exitCode=0
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.388349 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" event={"ID":"6a73ee03-bb76-478c-bcd1-2d08f0e6f538","Type":"ContainerDied","Data":"9544046d49726b08bf59463c644ffe22c27473e133ce5760004a0699f322d56b"}
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.394915 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.398653 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fd2fd94b9ccaed5ed1a571fdb7afa96704ef7d65e74faab448f6123159b08bfb"}
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.434530 4739 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-ccsmg container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.434590 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" podUID="3886312a-0449-43cc-b914-a4633b2c7e80" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.466174 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-grbnx"
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.700916 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-5cdhr"
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.702714 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.702768 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Feb 18 15:17:53 crc kubenswrapper[4739]: I0218 15:17:53.899046 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="869aa11b-eba7-4598-90dc-d50c642b9120" containerName="galera" containerID="cri-o://9c6d0d55a895a14de60b05d9c4c4d871217aebf1c393380fdf7c5b746a8e5a74" gracePeriod=13
Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.054183 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="acc9bbc5-8705-410b-977b-ca9245834e36" containerName="galera" containerID="cri-o://fbee4474fb7d9fba9da96c073301f9e9551a71041a83e9f79d995e7346274e4f" gracePeriod=12
Feb 18 15:17:54 crc kubenswrapper[4739]: E0218 15:17:54.119764 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f07097d931a10c25326e8aae468135c1bed2cc69762228b9f767f8fec46b12ea is running failed: container process not found" containerID="f07097d931a10c25326e8aae468135c1bed2cc69762228b9f767f8fec46b12ea" cmd=["grpc_health_probe","-addr=:50051"]
Feb 18 15:17:54 crc kubenswrapper[4739]: E0218 15:17:54.121795 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f07097d931a10c25326e8aae468135c1bed2cc69762228b9f767f8fec46b12ea is running failed:
container process not found" containerID="f07097d931a10c25326e8aae468135c1bed2cc69762228b9f767f8fec46b12ea" cmd=["grpc_health_probe","-addr=:50051"] Feb 18 15:17:54 crc kubenswrapper[4739]: E0218 15:17:54.125996 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f07097d931a10c25326e8aae468135c1bed2cc69762228b9f767f8fec46b12ea is running failed: container process not found" containerID="f07097d931a10c25326e8aae468135c1bed2cc69762228b9f767f8fec46b12ea" cmd=["grpc_health_probe","-addr=:50051"] Feb 18 15:17:54 crc kubenswrapper[4739]: E0218 15:17:54.126088 4739 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f07097d931a10c25326e8aae468135c1bed2cc69762228b9f767f8fec46b12ea is running failed: container process not found" probeType="Readiness" pod="openstack-operators/openstack-operator-index-cnhvq" podUID="07815587-810f-4c17-a671-8c613b3755d6" containerName="registry-server" Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.494109 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.495294 4739 generic.go:334] "Generic (PLEG): container finished" podID="8166ccce-dd66-40c5-aed1-8f560c573a6e" containerID="56a1307aaf68651b341dd9b1e7344cad7501683c6ef6d4563093ee7194ac943e" exitCode=0 Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.495406 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" event={"ID":"8166ccce-dd66-40c5-aed1-8f560c573a6e","Type":"ContainerDied","Data":"56a1307aaf68651b341dd9b1e7344cad7501683c6ef6d4563093ee7194ac943e"} Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.495432 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" event={"ID":"8166ccce-dd66-40c5-aed1-8f560c573a6e","Type":"ContainerStarted","Data":"7c10b9e576e31dbde16f6b2eb7e02d83eca868dcd2ec43c014f384aeb777572b"} Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.497615 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.504941 4739 patch_prober.go:28] interesting pod/route-controller-manager-77ddcd9567-p8jx5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.521223 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" podUID="8166ccce-dd66-40c5-aed1-8f560c573a6e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.536510 4739 generic.go:334] "Generic (PLEG): container finished" podID="fb09df70-be06-48b6-a41d-16fb110b7c55" containerID="f4b0d8e8e140fb6de11974026f9767ddfdf44ffbc0d5f61b072eb7c7dcd22916" exitCode=0 Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 
15:17:54.536527 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" event={"ID":"fb09df70-be06-48b6-a41d-16fb110b7c55","Type":"ContainerDied","Data":"f4b0d8e8e140fb6de11974026f9767ddfdf44ffbc0d5f61b072eb7c7dcd22916"} Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.555048 4739 generic.go:334] "Generic (PLEG): container finished" podID="6741b4b4-1817-4639-bdf6-b5be2729a1fa" containerID="0e3ddc635df525ddd18d3680b1b38102b9456254f940ba8fc0e4a8a2ed29bc7c" exitCode=1 Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.555425 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" event={"ID":"6741b4b4-1817-4639-bdf6-b5be2729a1fa","Type":"ContainerDied","Data":"0e3ddc635df525ddd18d3680b1b38102b9456254f940ba8fc0e4a8a2ed29bc7c"} Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.556203 4739 scope.go:117] "RemoveContainer" containerID="0e3ddc635df525ddd18d3680b1b38102b9456254f940ba8fc0e4a8a2ed29bc7c" Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.569429 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" event={"ID":"d5023d08-507d-422f-b218-72057e18ef93","Type":"ContainerStarted","Data":"5fc6aa4b3588196d6933d4bba39468b97269f5d68cc2cb1575e3abf3537fa7f5"} Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.572105 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5b78699c88-r8kr2" Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.585299 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cnhvq" event={"ID":"07815587-810f-4c17-a671-8c613b3755d6","Type":"ContainerStarted","Data":"64f53dfe7f249fc8322fc491805a4d05a0c7aa19f694870fac378263c9063db2"} Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.601492 4739 generic.go:334] "Generic (PLEG): container finished" podID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerID="714b0e311cf9c7f19440fbee07a029c180a9456bf6cca7b41a364e0fdd30c2ef" exitCode=0 Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.601864 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" event={"ID":"0dc6acff-649a-4e95-ba42-ad79dae4a787","Type":"ContainerDied","Data":"714b0e311cf9c7f19440fbee07a029c180a9456bf6cca7b41a364e0fdd30c2ef"} Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.601976 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" event={"ID":"0dc6acff-649a-4e95-ba42-ad79dae4a787","Type":"ContainerStarted","Data":"94cb00501c3a4d5e6ef68c5c3c525d7e53ae8dde475b2057415555bb90e3594a"} Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.609487 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.609822 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-28vcn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.609988 4739 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Feb 18 15:17:54 crc kubenswrapper[4739]: E0218 15:17:54.619109 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbee4474fb7d9fba9da96c073301f9e9551a71041a83e9f79d995e7346274e4f" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.643539 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" event={"ID":"4091e4df-be25-4e94-bf12-7079a8ce9b5f","Type":"ContainerStarted","Data":"b6c27ac8b74af2cdee13930c50556fa3bb6aee4a701357ae58cc42f5641ac48e"} Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.645141 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 15:17:54 crc kubenswrapper[4739]: E0218 15:17:54.645284 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbee4474fb7d9fba9da96c073301f9e9551a71041a83e9f79d995e7346274e4f" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 18 15:17:54 crc kubenswrapper[4739]: E0218 15:17:54.651583 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbee4474fb7d9fba9da96c073301f9e9551a71041a83e9f79d995e7346274e4f" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 18 15:17:54 crc kubenswrapper[4739]: E0218 15:17:54.651722 4739 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="acc9bbc5-8705-410b-977b-ca9245834e36" containerName="galera" Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.673306 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" event={"ID":"0480fc06-58bc-47d0-9446-8eb7ecad6509","Type":"ContainerStarted","Data":"f79789dc96cf5f56387b2e936fca0f9a26d35a541872d8c725e60632ac6f0364"} Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.674576 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.674658 4739 patch_prober.go:28] interesting pod/controller-manager-7b7465fb97-9dgmn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" start-of-body= Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.674690 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" 
podUID="0480fc06-58bc-47d0-9446-8eb7ecad6509" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.682552 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" event={"ID":"6a73ee03-bb76-478c-bcd1-2d08f0e6f538","Type":"ContainerStarted","Data":"229da6e40d3834292e453f286cca0fae54c4832c5c40a0aaf2e0d0615a7a5a0d"} Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.682983 4739 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-kmtx7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.683025 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podUID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.683152 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.683238 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.702831 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 18 15:17:54 crc kubenswrapper[4739]: I0218 15:17:54.702891 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.142573 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.593010 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-54k4b" Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.703847 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= 
Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.704091 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.716756 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9zgsz" event={"ID":"fb09df70-be06-48b6-a41d-16fb110b7c55","Type":"ContainerStarted","Data":"1400e06ac9c3667644ccc8d255a9a7d8beb088beaa9ca0022d782476d48f59fe"} Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.730162 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" event={"ID":"6741b4b4-1817-4639-bdf6-b5be2729a1fa","Type":"ContainerStarted","Data":"7ca101f91603600ce60b4e3e60d9e95e6228058d92afadaa466ea2ea9808746e"} Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.730364 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-28vcn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.730413 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.731196 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.733887 4739 patch_prober.go:28] interesting pod/controller-manager-7b7465fb97-9dgmn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" start-of-body= Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.733935 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" podUID="0480fc06-58bc-47d0-9446-8eb7ecad6509" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.733898 4739 patch_prober.go:28] interesting pod/route-controller-manager-77ddcd9567-p8jx5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.733998 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" podUID="8166ccce-dd66-40c5-aed1-8f560c573a6e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: 
connect: connection refused" Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.734651 4739 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-kmtx7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.734692 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" podUID="db4aad67-0ef8-474a-9e92-143738aed5b6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 18 15:17:55 crc kubenswrapper[4739]: I0218 15:17:55.898681 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-kjphg" Feb 18 15:17:56 crc kubenswrapper[4739]: E0218 15:17:56.020605 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9c6d0d55a895a14de60b05d9c4c4d871217aebf1c393380fdf7c5b746a8e5a74" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 18 15:17:56 crc kubenswrapper[4739]: E0218 15:17:56.024611 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9c6d0d55a895a14de60b05d9c4c4d871217aebf1c393380fdf7c5b746a8e5a74" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 18 15:17:56 crc kubenswrapper[4739]: E0218 15:17:56.032801 4739 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9c6d0d55a895a14de60b05d9c4c4d871217aebf1c393380fdf7c5b746a8e5a74" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 18 15:17:56 crc kubenswrapper[4739]: E0218 15:17:56.032916 4739 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="869aa11b-eba7-4598-90dc-d50c642b9120" containerName="galera" Feb 18 15:17:56 crc kubenswrapper[4739]: I0218 15:17:56.616668 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-w8l6z" Feb 18 15:17:56 crc kubenswrapper[4739]: I0218 15:17:56.701508 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 18 15:17:56 crc kubenswrapper[4739]: I0218 15:17:56.701565 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 18 15:17:56 crc kubenswrapper[4739]: I0218 
15:17:56.739020 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-28vcn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Feb 18 15:17:56 crc kubenswrapper[4739]: I0218 15:17:56.739069 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Feb 18 15:17:57 crc kubenswrapper[4739]: I0218 15:17:57.514774 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:17:57 crc kubenswrapper[4739]: I0218 15:17:57.606955 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-28vcn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Feb 18 15:17:57 crc kubenswrapper[4739]: I0218 15:17:57.607031 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Feb 18 15:17:57 crc kubenswrapper[4739]: I0218 15:17:57.607067 4739 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-28vcn container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Feb 18 15:17:57 crc kubenswrapper[4739]: I0218 15:17:57.607130 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" podUID="0dc6acff-649a-4e95-ba42-ad79dae4a787" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Feb 18 15:17:57 crc kubenswrapper[4739]: I0218 15:17:57.703782 4739 patch_prober.go:28] interesting pod/router-default-5444994796-5cdhr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 15:17:57 crc kubenswrapper[4739]: [-]has-synced failed: reason withheld Feb 18 15:17:57 crc kubenswrapper[4739]: [+]process-running ok Feb 18 15:17:57 crc kubenswrapper[4739]: healthz check failed Feb 18 15:17:57 crc kubenswrapper[4739]: I0218 15:17:57.704101 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5cdhr" podUID="b6cef9b9-56ee-4d0a-8c13-651e3f649a0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:17:57 crc kubenswrapper[4739]: I0218 15:17:57.722776 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 15:17:57 
crc kubenswrapper[4739]: I0218 15:17:57.753726 4739 generic.go:334] "Generic (PLEG): container finished" podID="869aa11b-eba7-4598-90dc-d50c642b9120" containerID="9c6d0d55a895a14de60b05d9c4c4d871217aebf1c393380fdf7c5b746a8e5a74" exitCode=0 Feb 18 15:17:57 crc kubenswrapper[4739]: I0218 15:17:57.753780 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"869aa11b-eba7-4598-90dc-d50c642b9120","Type":"ContainerDied","Data":"9c6d0d55a895a14de60b05d9c4c4d871217aebf1c393380fdf7c5b746a8e5a74"} Feb 18 15:17:58 crc kubenswrapper[4739]: I0218 15:17:58.143075 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 18 15:17:58 crc kubenswrapper[4739]: I0218 15:17:58.143110 4739 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6jxsc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 18 15:17:58 crc kubenswrapper[4739]: I0218 15:17:58.143555 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 18 15:17:58 crc kubenswrapper[4739]: I0218 15:17:58.143471 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" podUID="6a73ee03-bb76-478c-bcd1-2d08f0e6f538" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 18 15:17:58 crc kubenswrapper[4739]: I0218 15:17:58.708165 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 15:17:58 crc kubenswrapper[4739]: I0218 15:17:58.709865 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 15:17:58 crc kubenswrapper[4739]: I0218 15:17:58.716210 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-5cdhr" Feb 18 15:17:58 crc kubenswrapper[4739]: I0218 15:17:58.766172 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"869aa11b-eba7-4598-90dc-d50c642b9120","Type":"ContainerStarted","Data":"d07be60b5be3e4f85a67dfb8c57d155a8c34e5d0eef291f493a34dc8761e4361"} Feb 18 15:17:58 crc kubenswrapper[4739]: I0218 15:17:58.966111 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 15:17:59 crc kubenswrapper[4739]: I0218 15:17:59.373353 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Feb 18 15:17:59 crc kubenswrapper[4739]: I0218 15:17:59.373741 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:17:59 crc kubenswrapper[4739]: I0218 15:17:59.373825 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 15:17:59 crc kubenswrapper[4739]: I0218 15:17:59.374978 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ff0a839c3cd91b61bc5a9bec2e5ff1579fcf9258342af265e7f1b255f36409c"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 15:17:59 crc kubenswrapper[4739]: I0218 15:17:59.375059 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://3ff0a839c3cd91b61bc5a9bec2e5ff1579fcf9258342af265e7f1b255f36409c" gracePeriod=600 Feb 18 15:17:59 crc kubenswrapper[4739]: I0218 15:17:59.803890 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="3ff0a839c3cd91b61bc5a9bec2e5ff1579fcf9258342af265e7f1b255f36409c" exitCode=0 Feb 18 15:17:59 crc kubenswrapper[4739]: I0218 15:17:59.803960 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"3ff0a839c3cd91b61bc5a9bec2e5ff1579fcf9258342af265e7f1b255f36409c"} Feb 18 15:17:59 crc kubenswrapper[4739]: I0218 15:17:59.815835 4739 scope.go:117] "RemoveContainer" containerID="eea0629bf123ae618d7c8303b0956e44ce31f0b5bd0c367b6becf6aff1312863" Feb 18 15:17:59 crc kubenswrapper[4739]: I0218 15:17:59.852648 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-fqdjl" Feb 18 15:17:59 crc kubenswrapper[4739]: I0218 15:17:59.857463 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 15:17:59 crc kubenswrapper[4739]: E0218 15:17:59.890225 4739 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod947a1bc9_4557_4cd9_aa90_9d3893aad914.slice/crio-conmon-3ff0a839c3cd91b61bc5a9bec2e5ff1579fcf9258342af265e7f1b255f36409c.scope\": RecentStats: unable to find data in memory cache]" Feb 18 15:18:00 crc kubenswrapper[4739]: I0218 15:18:00.022294 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:18:00 crc kubenswrapper[4739]: I0218 15:18:00.097467 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-hrxn2" Feb 18 15:18:00 
crc kubenswrapper[4739]: I0218 15:18:00.106992 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-kmtx7" Feb 18 15:18:00 crc kubenswrapper[4739]: I0218 15:18:00.351333 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-7c7d667b45-kx8bw" Feb 18 15:18:00 crc kubenswrapper[4739]: I0218 15:18:00.538886 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-b9f98d489-4zk5t" Feb 18 15:18:00 crc kubenswrapper[4739]: I0218 15:18:00.623340 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-lmvdv" Feb 18 15:18:00 crc kubenswrapper[4739]: I0218 15:18:00.627024 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4f4zc" Feb 18 15:18:00 crc kubenswrapper[4739]: I0218 15:18:00.694324 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-k8g5m" Feb 18 15:18:00 crc kubenswrapper[4739]: I0218 15:18:00.819828 4739 generic.go:334] "Generic (PLEG): container finished" podID="acc9bbc5-8705-410b-977b-ca9245834e36" containerID="fbee4474fb7d9fba9da96c073301f9e9551a71041a83e9f79d995e7346274e4f" exitCode=0 Feb 18 15:18:00 crc kubenswrapper[4739]: I0218 15:18:00.819896 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"acc9bbc5-8705-410b-977b-ca9245834e36","Type":"ContainerDied","Data":"fbee4474fb7d9fba9da96c073301f9e9551a71041a83e9f79d995e7346274e4f"} Feb 18 15:18:00 crc kubenswrapper[4739]: I0218 15:18:00.819921 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"acc9bbc5-8705-410b-977b-ca9245834e36","Type":"ContainerStarted","Data":"68907d712cca3f7de51c445863d41f9dd8dfa7fa7896e8b60ec1027b8593cae6"} Feb 18 15:18:00 crc kubenswrapper[4739]: I0218 15:18:00.824333 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400"} Feb 18 15:18:01 crc kubenswrapper[4739]: I0218 15:18:01.069506 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-jblfh" Feb 18 15:18:01 crc kubenswrapper[4739]: I0218 15:18:01.148546 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jxsc" Feb 18 15:18:01 crc kubenswrapper[4739]: I0218 15:18:01.295623 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-mqkqw" Feb 18 15:18:01 crc kubenswrapper[4739]: I0218 15:18:01.505083 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-58bc79f98c-nzqw5" Feb 18 15:18:01 crc kubenswrapper[4739]: I0218 15:18:01.648034 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 15:18:01 crc kubenswrapper[4739]: I0218 15:18:01.648413 4739 
patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 18 15:18:01 crc kubenswrapper[4739]: I0218 15:18:01.648513 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 18 15:18:02 crc kubenswrapper[4739]: I0218 15:18:02.052230 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77ddcd9567-p8jx5" Feb 18 15:18:02 crc kubenswrapper[4739]: I0218 15:18:02.086596 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b7465fb97-9dgmn" Feb 18 15:18:02 crc kubenswrapper[4739]: I0218 15:18:02.155949 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-68g9x" Feb 18 15:18:02 crc kubenswrapper[4739]: I0218 15:18:02.438593 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-ccsmg" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.018563 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.018680 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.019815 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"c05a5e51b015b62511e6919cb70699ee5ff50db494a09d669f769b7ecdd61665"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.019878 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" containerID="cri-o://c05a5e51b015b62511e6919cb70699ee5ff50db494a09d669f769b7ecdd61665" gracePeriod=30 Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.228676 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qkbjq"] Feb 18 15:18:03 crc kubenswrapper[4739]: E0218 15:18:03.230748 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23565011-792b-4161-97b4-45ada5703730" containerName="registry-server" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.230773 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="23565011-792b-4161-97b4-45ada5703730" containerName="registry-server" Feb 18 15:18:03 crc kubenswrapper[4739]: E0218 15:18:03.230802 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23565011-792b-4161-97b4-45ada5703730" containerName="extract-utilities" Feb 18 15:18:03 crc 
kubenswrapper[4739]: I0218 15:18:03.230808 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="23565011-792b-4161-97b4-45ada5703730" containerName="extract-utilities" Feb 18 15:18:03 crc kubenswrapper[4739]: E0218 15:18:03.230828 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23565011-792b-4161-97b4-45ada5703730" containerName="extract-content" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.230835 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="23565011-792b-4161-97b4-45ada5703730" containerName="extract-content" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.231127 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="23565011-792b-4161-97b4-45ada5703730" containerName="registry-server" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.242616 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.316753 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgq8q\" (UniqueName: \"kubernetes.io/projected/82060158-06b2-4cf9-9f4a-57fe3e3b9916-kube-api-access-pgq8q\") pod \"community-operators-qkbjq\" (UID: \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\") " pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.316947 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82060158-06b2-4cf9-9f4a-57fe3e3b9916-catalog-content\") pod \"community-operators-qkbjq\" (UID: \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\") " pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.317080 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82060158-06b2-4cf9-9f4a-57fe3e3b9916-utilities\") pod \"community-operators-qkbjq\" (UID: \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\") " pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.338258 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qkbjq"] Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.419609 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82060158-06b2-4cf9-9f4a-57fe3e3b9916-catalog-content\") pod \"community-operators-qkbjq\" (UID: \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\") " pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.420594 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82060158-06b2-4cf9-9f4a-57fe3e3b9916-utilities\") pod \"community-operators-qkbjq\" (UID: \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\") " pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.420627 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82060158-06b2-4cf9-9f4a-57fe3e3b9916-catalog-content\") pod \"community-operators-qkbjq\" (UID: \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\") " pod="openshift-marketplace/community-operators-qkbjq" Feb 
18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.420602 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82060158-06b2-4cf9-9f4a-57fe3e3b9916-utilities\") pod \"community-operators-qkbjq\" (UID: \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\") " pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.421165 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgq8q\" (UniqueName: \"kubernetes.io/projected/82060158-06b2-4cf9-9f4a-57fe3e3b9916-kube-api-access-pgq8q\") pod \"community-operators-qkbjq\" (UID: \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\") " pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.457684 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgq8q\" (UniqueName: \"kubernetes.io/projected/82060158-06b2-4cf9-9f4a-57fe3e3b9916-kube-api-access-pgq8q\") pod \"community-operators-qkbjq\" (UID: \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\") " pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:03 crc kubenswrapper[4739]: I0218 15:18:03.574809 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:04 crc kubenswrapper[4739]: I0218 15:18:04.115004 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-cnhvq" Feb 18 15:18:04 crc kubenswrapper[4739]: I0218 15:18:04.115378 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-cnhvq" Feb 18 15:18:04 crc kubenswrapper[4739]: I0218 15:18:04.283203 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-cnhvq" Feb 18 15:18:04 crc kubenswrapper[4739]: I0218 15:18:04.595561 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 18 15:18:04 crc kubenswrapper[4739]: I0218 15:18:04.595899 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 18 15:18:04 crc kubenswrapper[4739]: I0218 15:18:04.924420 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-cnhvq" Feb 18 15:18:04 crc kubenswrapper[4739]: I0218 15:18:04.939712 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-86f6cb9d5d-8jd6g" Feb 18 15:18:04 crc kubenswrapper[4739]: I0218 15:18:04.974014 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qkbjq"] Feb 18 15:18:05 crc kubenswrapper[4739]: I0218 15:18:05.683776 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-tr2nx" Feb 18 15:18:05 crc kubenswrapper[4739]: I0218 15:18:05.887270 4739 generic.go:334] "Generic (PLEG): container finished" podID="82060158-06b2-4cf9-9f4a-57fe3e3b9916" containerID="7c285d4fd4d7c710a11bee599b9840fbe7c0de70e0a08daa7e5f1bc78b0615bb" exitCode=0 Feb 18 15:18:05 crc kubenswrapper[4739]: I0218 15:18:05.887372 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkbjq" 
event={"ID":"82060158-06b2-4cf9-9f4a-57fe3e3b9916","Type":"ContainerDied","Data":"7c285d4fd4d7c710a11bee599b9840fbe7c0de70e0a08daa7e5f1bc78b0615bb"} Feb 18 15:18:05 crc kubenswrapper[4739]: I0218 15:18:05.887654 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkbjq" event={"ID":"82060158-06b2-4cf9-9f4a-57fe3e3b9916","Type":"ContainerStarted","Data":"3c5afd23ff36fa31fef20ba87125c6becb20c52925093543710d4d4c92ef82c5"} Feb 18 15:18:06 crc kubenswrapper[4739]: I0218 15:18:06.009354 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 18 15:18:06 crc kubenswrapper[4739]: I0218 15:18:06.010010 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 18 15:18:06 crc kubenswrapper[4739]: I0218 15:18:06.449889 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl" Feb 18 15:18:06 crc kubenswrapper[4739]: I0218 15:18:06.901944 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkbjq" event={"ID":"82060158-06b2-4cf9-9f4a-57fe3e3b9916","Type":"ContainerStarted","Data":"e383a96d41a65aa8a7d26f6b4ec48763f5c5623cf31123a208a2547499a56cb7"} Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.176934 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-8gqkq" Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.203858 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2kdgq"] Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.206611 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.248872 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2kdgq"] Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.327656 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkn28\" (UniqueName: \"kubernetes.io/projected/3a13d0fc-5518-446d-8ce5-32db175f8570-kube-api-access-pkn28\") pod \"redhat-operators-2kdgq\" (UID: \"3a13d0fc-5518-446d-8ce5-32db175f8570\") " pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.327965 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a13d0fc-5518-446d-8ce5-32db175f8570-utilities\") pod \"redhat-operators-2kdgq\" (UID: \"3a13d0fc-5518-446d-8ce5-32db175f8570\") " pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.328544 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a13d0fc-5518-446d-8ce5-32db175f8570-catalog-content\") pod \"redhat-operators-2kdgq\" (UID: \"3a13d0fc-5518-446d-8ce5-32db175f8570\") " pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.430964 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a13d0fc-5518-446d-8ce5-32db175f8570-utilities\") pod \"redhat-operators-2kdgq\" (UID: \"3a13d0fc-5518-446d-8ce5-32db175f8570\") " pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.431137 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a13d0fc-5518-446d-8ce5-32db175f8570-catalog-content\") pod \"redhat-operators-2kdgq\" (UID: \"3a13d0fc-5518-446d-8ce5-32db175f8570\") " pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.431194 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkn28\" (UniqueName: \"kubernetes.io/projected/3a13d0fc-5518-446d-8ce5-32db175f8570-kube-api-access-pkn28\") pod \"redhat-operators-2kdgq\" (UID: \"3a13d0fc-5518-446d-8ce5-32db175f8570\") " pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.432370 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a13d0fc-5518-446d-8ce5-32db175f8570-utilities\") pod \"redhat-operators-2kdgq\" (UID: \"3a13d0fc-5518-446d-8ce5-32db175f8570\") " pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.432426 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a13d0fc-5518-446d-8ce5-32db175f8570-catalog-content\") pod \"redhat-operators-2kdgq\" (UID: \"3a13d0fc-5518-446d-8ce5-32db175f8570\") " pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.452259 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pkn28\" (UniqueName: \"kubernetes.io/projected/3a13d0fc-5518-446d-8ce5-32db175f8570-kube-api-access-pkn28\") pod \"redhat-operators-2kdgq\" (UID: \"3a13d0fc-5518-446d-8ce5-32db175f8570\") " pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.540151 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:07 crc kubenswrapper[4739]: I0218 15:18:07.611907 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-28vcn" Feb 18 15:18:08 crc kubenswrapper[4739]: I0218 15:18:08.392295 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2kdgq"] Feb 18 15:18:08 crc kubenswrapper[4739]: I0218 15:18:08.951794 4739 generic.go:334] "Generic (PLEG): container finished" podID="82060158-06b2-4cf9-9f4a-57fe3e3b9916" containerID="e383a96d41a65aa8a7d26f6b4ec48763f5c5623cf31123a208a2547499a56cb7" exitCode=0 Feb 18 15:18:08 crc kubenswrapper[4739]: I0218 15:18:08.952034 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkbjq" event={"ID":"82060158-06b2-4cf9-9f4a-57fe3e3b9916","Type":"ContainerDied","Data":"e383a96d41a65aa8a7d26f6b4ec48763f5c5623cf31123a208a2547499a56cb7"} Feb 18 15:18:08 crc kubenswrapper[4739]: I0218 15:18:08.956599 4739 generic.go:334] "Generic (PLEG): container finished" podID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerID="5b1838b5e43972eec6e100240448d1039d3f943befea24d158c3472b9de83090" exitCode=0 Feb 18 15:18:08 crc kubenswrapper[4739]: I0218 15:18:08.956691 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kdgq" event={"ID":"3a13d0fc-5518-446d-8ce5-32db175f8570","Type":"ContainerDied","Data":"5b1838b5e43972eec6e100240448d1039d3f943befea24d158c3472b9de83090"} Feb 18 15:18:08 crc kubenswrapper[4739]: I0218 15:18:08.956730 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kdgq" event={"ID":"3a13d0fc-5518-446d-8ce5-32db175f8570","Type":"ContainerStarted","Data":"1d350450ce9c4bc4c65ff0ae502f9f800a652d5d5a2f99a2c8e967161fb37f2b"} Feb 18 15:18:08 crc kubenswrapper[4739]: I0218 15:18:08.967429 4739 generic.go:334] "Generic (PLEG): container finished" podID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerID="c05a5e51b015b62511e6919cb70699ee5ff50db494a09d669f769b7ecdd61665" exitCode=0 Feb 18 15:18:08 crc kubenswrapper[4739]: I0218 15:18:08.967508 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4","Type":"ContainerDied","Data":"c05a5e51b015b62511e6919cb70699ee5ff50db494a09d669f769b7ecdd61665"} Feb 18 15:18:09 crc kubenswrapper[4739]: I0218 15:18:09.987594 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkbjq" event={"ID":"82060158-06b2-4cf9-9f4a-57fe3e3b9916","Type":"ContainerStarted","Data":"8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad"} Feb 18 15:18:09 crc kubenswrapper[4739]: I0218 15:18:09.995108 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kdgq" event={"ID":"3a13d0fc-5518-446d-8ce5-32db175f8570","Type":"ContainerStarted","Data":"eba584e2877040de12272810a04952bc93f1cca86d631336ed5c8209780856d1"} Feb 18 15:18:10 crc kubenswrapper[4739]: I0218 
15:18:10.008649 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qkbjq" podStartSLOduration=3.2616291889999998 podStartE2EDuration="7.007774678s" podCreationTimestamp="2026-02-18 15:18:03 +0000 UTC" firstStartedPulling="2026-02-18 15:18:05.890281019 +0000 UTC m=+4718.386001941" lastFinishedPulling="2026-02-18 15:18:09.636426508 +0000 UTC m=+4722.132147430" observedRunningTime="2026-02-18 15:18:10.006689531 +0000 UTC m=+4722.502410473" watchObservedRunningTime="2026-02-18 15:18:10.007774678 +0000 UTC m=+4722.503495600" Feb 18 15:18:11 crc kubenswrapper[4739]: I0218 15:18:11.021770 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4","Type":"ContainerStarted","Data":"14efbd72afaf309190c1330115bb501e01a5e04256ff4703359f3eda7a513f37"} Feb 18 15:18:11 crc kubenswrapper[4739]: I0218 15:18:11.648248 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 18 15:18:11 crc kubenswrapper[4739]: I0218 15:18:11.648496 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 18 15:18:13 crc kubenswrapper[4739]: I0218 15:18:13.575562 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:13 crc kubenswrapper[4739]: I0218 15:18:13.576200 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:13 crc kubenswrapper[4739]: I0218 15:18:13.645681 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:14 crc kubenswrapper[4739]: I0218 15:18:14.109781 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:15 crc kubenswrapper[4739]: I0218 15:18:15.196719 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qkbjq"] Feb 18 15:18:15 crc kubenswrapper[4739]: I0218 15:18:15.299271 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" podUID="bcd76c5a-1d18-4986-9be4-399139f65c11" containerName="oauth-openshift" containerID="cri-o://873aca0bbc81a7124b75ae87a2863a7a8a119c825b1bc26fde747334cd6eb3e4" gracePeriod=15 Feb 18 15:18:15 crc kubenswrapper[4739]: I0218 15:18:15.997747 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 15:18:16 crc kubenswrapper[4739]: I0218 15:18:16.023839 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:18:16 crc kubenswrapper[4739]: I0218 15:18:16.075955 4739 generic.go:334] 
"Generic (PLEG): container finished" podID="bcd76c5a-1d18-4986-9be4-399139f65c11" containerID="873aca0bbc81a7124b75ae87a2863a7a8a119c825b1bc26fde747334cd6eb3e4" exitCode=0 Feb 18 15:18:16 crc kubenswrapper[4739]: I0218 15:18:16.076063 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" event={"ID":"bcd76c5a-1d18-4986-9be4-399139f65c11","Type":"ContainerDied","Data":"873aca0bbc81a7124b75ae87a2863a7a8a119c825b1bc26fde747334cd6eb3e4"} Feb 18 15:18:16 crc kubenswrapper[4739]: I0218 15:18:16.076116 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" event={"ID":"bcd76c5a-1d18-4986-9be4-399139f65c11","Type":"ContainerStarted","Data":"5a2ae0a4472b6c53563aa19aa5c52aa81a233460c30511d55d6d99288b66a85a"} Feb 18 15:18:16 crc kubenswrapper[4739]: I0218 15:18:16.076185 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qkbjq" podUID="82060158-06b2-4cf9-9f4a-57fe3e3b9916" containerName="registry-server" containerID="cri-o://8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad" gracePeriod=2 Feb 18 15:18:16 crc kubenswrapper[4739]: I0218 15:18:16.076428 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 15:18:16 crc kubenswrapper[4739]: I0218 15:18:16.076718 4739 patch_prober.go:28] interesting pod/oauth-openshift-798cf5fb96-6gsw8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" start-of-body= Feb 18 15:18:16 crc kubenswrapper[4739]: I0218 15:18:16.076758 4739 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" podUID="bcd76c5a-1d18-4986-9be4-399139f65c11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.015963 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.033745 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82060158-06b2-4cf9-9f4a-57fe3e3b9916-utilities\") pod \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\" (UID: \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\") " Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.033818 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgq8q\" (UniqueName: \"kubernetes.io/projected/82060158-06b2-4cf9-9f4a-57fe3e3b9916-kube-api-access-pgq8q\") pod \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\" (UID: \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\") " Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.033860 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82060158-06b2-4cf9-9f4a-57fe3e3b9916-catalog-content\") pod \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\" (UID: \"82060158-06b2-4cf9-9f4a-57fe3e3b9916\") " Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.035926 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82060158-06b2-4cf9-9f4a-57fe3e3b9916-utilities" (OuterVolumeSpecName: "utilities") pod "82060158-06b2-4cf9-9f4a-57fe3e3b9916" (UID: "82060158-06b2-4cf9-9f4a-57fe3e3b9916"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.045360 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82060158-06b2-4cf9-9f4a-57fe3e3b9916-kube-api-access-pgq8q" (OuterVolumeSpecName: "kube-api-access-pgq8q") pod "82060158-06b2-4cf9-9f4a-57fe3e3b9916" (UID: "82060158-06b2-4cf9-9f4a-57fe3e3b9916"). InnerVolumeSpecName "kube-api-access-pgq8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.107914 4739 generic.go:334] "Generic (PLEG): container finished" podID="82060158-06b2-4cf9-9f4a-57fe3e3b9916" containerID="8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad" exitCode=0 Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.108172 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkbjq" event={"ID":"82060158-06b2-4cf9-9f4a-57fe3e3b9916","Type":"ContainerDied","Data":"8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad"} Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.108328 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkbjq" event={"ID":"82060158-06b2-4cf9-9f4a-57fe3e3b9916","Type":"ContainerDied","Data":"3c5afd23ff36fa31fef20ba87125c6becb20c52925093543710d4d4c92ef82c5"} Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.108195 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qkbjq" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.108354 4739 scope.go:117] "RemoveContainer" containerID="8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.119704 4739 generic.go:334] "Generic (PLEG): container finished" podID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerID="eba584e2877040de12272810a04952bc93f1cca86d631336ed5c8209780856d1" exitCode=0 Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.120210 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kdgq" event={"ID":"3a13d0fc-5518-446d-8ce5-32db175f8570","Type":"ContainerDied","Data":"eba584e2877040de12272810a04952bc93f1cca86d631336ed5c8209780856d1"} Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.123532 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82060158-06b2-4cf9-9f4a-57fe3e3b9916-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82060158-06b2-4cf9-9f4a-57fe3e3b9916" (UID: "82060158-06b2-4cf9-9f4a-57fe3e3b9916"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.127897 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-798cf5fb96-6gsw8" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.137942 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82060158-06b2-4cf9-9f4a-57fe3e3b9916-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.137971 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgq8q\" (UniqueName: \"kubernetes.io/projected/82060158-06b2-4cf9-9f4a-57fe3e3b9916-kube-api-access-pgq8q\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.137979 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82060158-06b2-4cf9-9f4a-57fe3e3b9916-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.177991 4739 scope.go:117] "RemoveContainer" containerID="e383a96d41a65aa8a7d26f6b4ec48763f5c5623cf31123a208a2547499a56cb7" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.236236 4739 scope.go:117] "RemoveContainer" containerID="7c285d4fd4d7c710a11bee599b9840fbe7c0de70e0a08daa7e5f1bc78b0615bb" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.274590 4739 scope.go:117] "RemoveContainer" containerID="8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad" Feb 18 15:18:17 crc kubenswrapper[4739]: E0218 15:18:17.275156 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad\": container with ID starting with 8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad not found: ID does not exist" containerID="8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.275193 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad"} err="failed to get container 
status \"8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad\": rpc error: code = NotFound desc = could not find container \"8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad\": container with ID starting with 8346df76cf5912145b3fcecee27703a515fbcff6cbb852cb803a8ed0d764c6ad not found: ID does not exist" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.275219 4739 scope.go:117] "RemoveContainer" containerID="e383a96d41a65aa8a7d26f6b4ec48763f5c5623cf31123a208a2547499a56cb7" Feb 18 15:18:17 crc kubenswrapper[4739]: E0218 15:18:17.275585 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e383a96d41a65aa8a7d26f6b4ec48763f5c5623cf31123a208a2547499a56cb7\": container with ID starting with e383a96d41a65aa8a7d26f6b4ec48763f5c5623cf31123a208a2547499a56cb7 not found: ID does not exist" containerID="e383a96d41a65aa8a7d26f6b4ec48763f5c5623cf31123a208a2547499a56cb7" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.275612 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e383a96d41a65aa8a7d26f6b4ec48763f5c5623cf31123a208a2547499a56cb7"} err="failed to get container status \"e383a96d41a65aa8a7d26f6b4ec48763f5c5623cf31123a208a2547499a56cb7\": rpc error: code = NotFound desc = could not find container \"e383a96d41a65aa8a7d26f6b4ec48763f5c5623cf31123a208a2547499a56cb7\": container with ID starting with e383a96d41a65aa8a7d26f6b4ec48763f5c5623cf31123a208a2547499a56cb7 not found: ID does not exist" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.275629 4739 scope.go:117] "RemoveContainer" containerID="7c285d4fd4d7c710a11bee599b9840fbe7c0de70e0a08daa7e5f1bc78b0615bb" Feb 18 15:18:17 crc kubenswrapper[4739]: E0218 15:18:17.275962 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c285d4fd4d7c710a11bee599b9840fbe7c0de70e0a08daa7e5f1bc78b0615bb\": container with ID starting with 7c285d4fd4d7c710a11bee599b9840fbe7c0de70e0a08daa7e5f1bc78b0615bb not found: ID does not exist" containerID="7c285d4fd4d7c710a11bee599b9840fbe7c0de70e0a08daa7e5f1bc78b0615bb" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.275995 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c285d4fd4d7c710a11bee599b9840fbe7c0de70e0a08daa7e5f1bc78b0615bb"} err="failed to get container status \"7c285d4fd4d7c710a11bee599b9840fbe7c0de70e0a08daa7e5f1bc78b0615bb\": rpc error: code = NotFound desc = could not find container \"7c285d4fd4d7c710a11bee599b9840fbe7c0de70e0a08daa7e5f1bc78b0615bb\": container with ID starting with 7c285d4fd4d7c710a11bee599b9840fbe7c0de70e0a08daa7e5f1bc78b0615bb not found: ID does not exist" Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.445222 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qkbjq"] Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.466982 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qkbjq"] Feb 18 15:18:18 crc kubenswrapper[4739]: I0218 15:18:18.136356 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kdgq" event={"ID":"3a13d0fc-5518-446d-8ce5-32db175f8570","Type":"ContainerStarted","Data":"58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41"} Feb 18 15:18:18 crc kubenswrapper[4739]: I0218 15:18:18.163651 4739 
Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.445222 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qkbjq"] Feb 18 15:18:17 crc kubenswrapper[4739]: I0218 15:18:17.466982 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qkbjq"] Feb 18 15:18:18 crc kubenswrapper[4739]: I0218 15:18:18.136356 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kdgq" event={"ID":"3a13d0fc-5518-446d-8ce5-32db175f8570","Type":"ContainerStarted","Data":"58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41"} Feb 18 15:18:18 crc kubenswrapper[4739]: I0218 15:18:18.163651 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2kdgq" podStartSLOduration=2.543031862 podStartE2EDuration="11.163631291s" podCreationTimestamp="2026-02-18 15:18:07 +0000 UTC" firstStartedPulling="2026-02-18 15:18:08.959200411 +0000 UTC m=+4721.454921333" lastFinishedPulling="2026-02-18 15:18:17.57979984 +0000 UTC m=+4730.075520762" observedRunningTime="2026-02-18 15:18:18.15725082 +0000 UTC m=+4730.652971762" watchObservedRunningTime="2026-02-18 15:18:18.163631291 +0000 UTC m=+4730.659352213" Feb 18 15:18:18 crc kubenswrapper[4739]: I0218 15:18:18.426937 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82060158-06b2-4cf9-9f4a-57fe3e3b9916" path="/var/lib/kubelet/pods/82060158-06b2-4cf9-9f4a-57fe3e3b9916/volumes" Feb 18 15:18:21 crc kubenswrapper[4739]: I0218 15:18:21.023008 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:18:21 crc kubenswrapper[4739]: I0218 15:18:21.649546 4739 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 18 15:18:21 crc kubenswrapper[4739]: I0218 15:18:21.649606 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 18 15:18:21 crc kubenswrapper[4739]: I0218 15:18:21.649660 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 15:18:21 crc kubenswrapper[4739]: I0218 15:18:21.651187 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"fd2fd94b9ccaed5ed1a571fdb7afa96704ef7d65e74faab448f6123159b08bfb"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 18 15:18:21 crc kubenswrapper[4739]: I0218 15:18:21.651345 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://fd2fd94b9ccaed5ed1a571fdb7afa96704ef7d65e74faab448f6123159b08bfb" gracePeriod=30
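
"Container kube-controller-manager failed startup probe, will be restarted" is the kubelet acting on an exhausted startup probe: the container is killed with the 30-second grace period shown above and then recreated. A sketch of a probe shaped like the one failing here, against GET https://192.168.126.11:10257/healthz; the endpoint comes from the log, periodSeconds is inferred from the roughly 10-second spacing of the failures at 15:18:11 and 15:18:21, and failureThreshold is purely an assumption:

// Illustrative startup probe; field names per current k8s.io/api/core/v1.
package probes

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

var kcmStartupProbe = &corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{
			Path:   "/healthz",
			Port:   intstr.FromInt(10257),
			Scheme: corev1.URISchemeHTTPS,
		},
	},
	PeriodSeconds:    10, // consistent with failures logged ~10s apart
	FailureThreshold: 3,  // assumed; once exceeded, the kubelet restarts the container
}
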
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:27 crc kubenswrapper[4739]: I0218 15:18:27.540963 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:18:28 crc kubenswrapper[4739]: I0218 15:18:28.623872 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2kdgq" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerName="registry-server" probeResult="failure" output=< Feb 18 15:18:28 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:18:28 crc kubenswrapper[4739]: > Feb 18 15:18:31 crc kubenswrapper[4739]: I0218 15:18:31.014873 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:18:36 crc kubenswrapper[4739]: I0218 15:18:36.018225 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:18:38 crc kubenswrapper[4739]: I0218 15:18:38.595912 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2kdgq" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerName="registry-server" probeResult="failure" output=< Feb 18 15:18:38 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:18:38 crc kubenswrapper[4739]: > Feb 18 15:18:41 crc kubenswrapper[4739]: I0218 15:18:41.020563 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:18:46 crc kubenswrapper[4739]: I0218 15:18:46.043883 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:18:46 crc kubenswrapper[4739]: I0218 15:18:46.470125 4739 generic.go:334] "Generic (PLEG): container finished" podID="2d70fa76-2eec-4ca5-abd7-44a082625a40" containerID="8ce8bd03e7ae58cb2a6f6888de57ac7cc952f171cde62e5925154c461eb9d79b" exitCode=1 Feb 18 15:18:46 crc kubenswrapper[4739]: I0218 15:18:46.470169 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2d70fa76-2eec-4ca5-abd7-44a082625a40","Type":"ContainerDied","Data":"8ce8bd03e7ae58cb2a6f6888de57ac7cc952f171cde62e5925154c461eb9d79b"} Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.033792 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.202069 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-openstack-config-secret\") pod \"2d70fa76-2eec-4ca5-abd7-44a082625a40\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.202212 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d70fa76-2eec-4ca5-abd7-44a082625a40-config-data\") pod \"2d70fa76-2eec-4ca5-abd7-44a082625a40\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.202241 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-ca-certs\") pod \"2d70fa76-2eec-4ca5-abd7-44a082625a40\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.202268 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2d70fa76-2eec-4ca5-abd7-44a082625a40-openstack-config\") pod \"2d70fa76-2eec-4ca5-abd7-44a082625a40\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.202319 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2d70fa76-2eec-4ca5-abd7-44a082625a40-test-operator-ephemeral-workdir\") pod \"2d70fa76-2eec-4ca5-abd7-44a082625a40\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.202385 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-964bz\" (UniqueName: \"kubernetes.io/projected/2d70fa76-2eec-4ca5-abd7-44a082625a40-kube-api-access-964bz\") pod \"2d70fa76-2eec-4ca5-abd7-44a082625a40\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.202484 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-ssh-key\") pod \"2d70fa76-2eec-4ca5-abd7-44a082625a40\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.202588 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2d70fa76-2eec-4ca5-abd7-44a082625a40-test-operator-ephemeral-temporary\") pod \"2d70fa76-2eec-4ca5-abd7-44a082625a40\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.202619 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"2d70fa76-2eec-4ca5-abd7-44a082625a40\" (UID: \"2d70fa76-2eec-4ca5-abd7-44a082625a40\") " Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.203282 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d70fa76-2eec-4ca5-abd7-44a082625a40-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "2d70fa76-2eec-4ca5-abd7-44a082625a40" (UID: "2d70fa76-2eec-4ca5-abd7-44a082625a40"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.203869 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d70fa76-2eec-4ca5-abd7-44a082625a40-config-data" (OuterVolumeSpecName: "config-data") pod "2d70fa76-2eec-4ca5-abd7-44a082625a40" (UID: "2d70fa76-2eec-4ca5-abd7-44a082625a40"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.209644 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d70fa76-2eec-4ca5-abd7-44a082625a40-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "2d70fa76-2eec-4ca5-abd7-44a082625a40" (UID: "2d70fa76-2eec-4ca5-abd7-44a082625a40"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.246150 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "test-operator-logs") pod "2d70fa76-2eec-4ca5-abd7-44a082625a40" (UID: "2d70fa76-2eec-4ca5-abd7-44a082625a40"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.290308 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2d70fa76-2eec-4ca5-abd7-44a082625a40" (UID: "2d70fa76-2eec-4ca5-abd7-44a082625a40"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.292623 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d70fa76-2eec-4ca5-abd7-44a082625a40-kube-api-access-964bz" (OuterVolumeSpecName: "kube-api-access-964bz") pod "2d70fa76-2eec-4ca5-abd7-44a082625a40" (UID: "2d70fa76-2eec-4ca5-abd7-44a082625a40"). InnerVolumeSpecName "kube-api-access-964bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.305482 4739 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d70fa76-2eec-4ca5-abd7-44a082625a40-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.305526 4739 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2d70fa76-2eec-4ca5-abd7-44a082625a40-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.305538 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-964bz\" (UniqueName: \"kubernetes.io/projected/2d70fa76-2eec-4ca5-abd7-44a082625a40-kube-api-access-964bz\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.305547 4739 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.305556 4739 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2d70fa76-2eec-4ca5-abd7-44a082625a40-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.305582 4739 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.306182 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2d70fa76-2eec-4ca5-abd7-44a082625a40" (UID: "2d70fa76-2eec-4ca5-abd7-44a082625a40"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.333397 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "2d70fa76-2eec-4ca5-abd7-44a082625a40" (UID: "2d70fa76-2eec-4ca5-abd7-44a082625a40"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.343952 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d70fa76-2eec-4ca5-abd7-44a082625a40-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2d70fa76-2eec-4ca5-abd7-44a082625a40" (UID: "2d70fa76-2eec-4ca5-abd7-44a082625a40"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.353610 4739 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.408176 4739 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.408547 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.408661 4739 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2d70fa76-2eec-4ca5-abd7-44a082625a40-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.408750 4739 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2d70fa76-2eec-4ca5-abd7-44a082625a40-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.504584 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2d70fa76-2eec-4ca5-abd7-44a082625a40","Type":"ContainerDied","Data":"49f393666c6fdee741ccda2b76d76452444d662539e8f00cf321ebbda9fd14bc"} Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.504631 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49f393666c6fdee741ccda2b76d76452444d662539e8f00cf321ebbda9fd14bc" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.504686 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 15:18:48 crc kubenswrapper[4739]: I0218 15:18:48.602270 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2kdgq" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerName="registry-server" probeResult="failure" output=< Feb 18 15:18:48 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:18:48 crc kubenswrapper[4739]: > Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.766892 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 15:18:50 crc kubenswrapper[4739]: E0218 15:18:50.768081 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82060158-06b2-4cf9-9f4a-57fe3e3b9916" containerName="registry-server" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.768101 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="82060158-06b2-4cf9-9f4a-57fe3e3b9916" containerName="registry-server" Feb 18 15:18:50 crc kubenswrapper[4739]: E0218 15:18:50.768158 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d70fa76-2eec-4ca5-abd7-44a082625a40" containerName="tempest-tests-tempest-tests-runner" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.768167 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d70fa76-2eec-4ca5-abd7-44a082625a40" containerName="tempest-tests-tempest-tests-runner" Feb 18 15:18:50 crc kubenswrapper[4739]: E0218 15:18:50.768199 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82060158-06b2-4cf9-9f4a-57fe3e3b9916" containerName="extract-utilities" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.768208 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="82060158-06b2-4cf9-9f4a-57fe3e3b9916" containerName="extract-utilities" Feb 18 15:18:50 crc kubenswrapper[4739]: E0218 15:18:50.768224 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82060158-06b2-4cf9-9f4a-57fe3e3b9916" containerName="extract-content" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.768231 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="82060158-06b2-4cf9-9f4a-57fe3e3b9916" containerName="extract-content" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.768542 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="82060158-06b2-4cf9-9f4a-57fe3e3b9916" containerName="registry-server" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.768578 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d70fa76-2eec-4ca5-abd7-44a082625a40" containerName="tempest-tests-tempest-tests-runner" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.769593 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.774098 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-qfs6g" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.779500 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.867269 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wqsk\" (UniqueName: \"kubernetes.io/projected/fafc1147-dd3a-429c-ae6f-48865401c68b-kube-api-access-9wqsk\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fafc1147-dd3a-429c-ae6f-48865401c68b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.867537 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fafc1147-dd3a-429c-ae6f-48865401c68b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.970282 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wqsk\" (UniqueName: \"kubernetes.io/projected/fafc1147-dd3a-429c-ae6f-48865401c68b-kube-api-access-9wqsk\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fafc1147-dd3a-429c-ae6f-48865401c68b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.970376 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fafc1147-dd3a-429c-ae6f-48865401c68b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.971079 4739 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fafc1147-dd3a-429c-ae6f-48865401c68b\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:18:50 crc kubenswrapper[4739]: I0218 15:18:50.995350 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wqsk\" (UniqueName: \"kubernetes.io/projected/fafc1147-dd3a-429c-ae6f-48865401c68b-kube-api-access-9wqsk\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fafc1147-dd3a-429c-ae6f-48865401c68b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:18:51 crc kubenswrapper[4739]: I0218 15:18:51.012308 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fafc1147-dd3a-429c-ae6f-48865401c68b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:18:51 crc 
kubenswrapper[4739]: I0218 15:18:51.018212 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:18:51 crc kubenswrapper[4739]: I0218 15:18:51.101824 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 15:18:51 crc kubenswrapper[4739]: I0218 15:18:51.705992 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 15:18:51 crc kubenswrapper[4739]: I0218 15:18:51.713950 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 15:18:52 crc kubenswrapper[4739]: I0218 15:18:52.557505 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log" Feb 18 15:18:52 crc kubenswrapper[4739]: I0218 15:18:52.559992 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 18 15:18:52 crc kubenswrapper[4739]: I0218 15:18:52.562631 4739 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="fd2fd94b9ccaed5ed1a571fdb7afa96704ef7d65e74faab448f6123159b08bfb" exitCode=137 Feb 18 15:18:52 crc kubenswrapper[4739]: I0218 15:18:52.562703 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"fd2fd94b9ccaed5ed1a571fdb7afa96704ef7d65e74faab448f6123159b08bfb"} Feb 18 15:18:52 crc kubenswrapper[4739]: I0218 15:18:52.562738 4739 scope.go:117] "RemoveContainer" containerID="610a047b229be1341e5743f79181f9b3692358957501791b9cc4b591a8f75fdd" Feb 18 15:18:52 crc kubenswrapper[4739]: I0218 15:18:52.565383 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"fafc1147-dd3a-429c-ae6f-48865401c68b","Type":"ContainerStarted","Data":"08c8e2669c846005f6059f146d98779ae2b4e462d895c79341686389411ee000"} Feb 18 15:18:53 crc kubenswrapper[4739]: I0218 15:18:53.584589 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log" Feb 18 15:18:53 crc kubenswrapper[4739]: I0218 15:18:53.587541 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a81dd773adf0ddee9b34eb2f33f2c3c798fa05884811efd4b1dff5fa5252df71"} Feb 18 15:18:53 crc kubenswrapper[4739]: I0218 15:18:53.589605 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"fafc1147-dd3a-429c-ae6f-48865401c68b","Type":"ContainerStarted","Data":"fb3322bdc5fbf1408dfee781cff5b9ab1904ac38c88d720ff5a08732d504f1bd"} Feb 18 15:18:53 crc kubenswrapper[4739]: I0218 15:18:53.629240 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.25632063 podStartE2EDuration="3.62922077s" podCreationTimestamp="2026-02-18 15:18:50 +0000 UTC" firstStartedPulling="2026-02-18 15:18:51.713660998 +0000 UTC m=+4764.209381920" lastFinishedPulling="2026-02-18 15:18:53.086561128 +0000 UTC m=+4765.582282060" observedRunningTime="2026-02-18 15:18:53.624108291 +0000 UTC m=+4766.119829223" watchObservedRunningTime="2026-02-18 15:18:53.62922077 +0000 UTC m=+4766.124941692" Feb 18 15:18:56 crc kubenswrapper[4739]: I0218 15:18:56.022764 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:18:57 crc kubenswrapper[4739]: I0218 15:18:57.722214 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 15:18:58 crc kubenswrapper[4739]: I0218 15:18:58.595459 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2kdgq" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerName="registry-server" probeResult="failure" output=< Feb 18 15:18:58 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:18:58 crc kubenswrapper[4739]: > Feb 18 15:19:01 crc kubenswrapper[4739]: I0218 15:19:01.019726 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:19:01 crc kubenswrapper[4739]: I0218 15:19:01.648179 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 15:19:01 crc kubenswrapper[4739]: I0218 15:19:01.651982 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 15:19:06 crc kubenswrapper[4739]: I0218 15:19:06.138095 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:19:07 crc kubenswrapper[4739]: I0218 15:19:07.593143 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:19:07 crc kubenswrapper[4739]: I0218 15:19:07.650304 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:19:07 crc kubenswrapper[4739]: I0218 15:19:07.727698 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 15:19:08 crc kubenswrapper[4739]: I0218 15:19:08.425985 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2kdgq"] Feb 18 15:19:09 crc kubenswrapper[4739]: I0218 15:19:09.154525 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2kdgq" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerName="registry-server" containerID="cri-o://58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41" 
Feb 18 15:18:56 crc kubenswrapper[4739]: I0218 15:18:56.022764 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:18:57 crc kubenswrapper[4739]: I0218 15:18:57.722214 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 15:18:58 crc kubenswrapper[4739]: I0218 15:18:58.595459 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2kdgq" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerName="registry-server" probeResult="failure" output=< Feb 18 15:18:58 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:18:58 crc kubenswrapper[4739]: > Feb 18 15:19:01 crc kubenswrapper[4739]: I0218 15:19:01.019726 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:19:01 crc kubenswrapper[4739]: I0218 15:19:01.648179 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 15:19:01 crc kubenswrapper[4739]: I0218 15:19:01.651982 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 15:19:06 crc kubenswrapper[4739]: I0218 15:19:06.138095 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:19:07 crc kubenswrapper[4739]: I0218 15:19:07.593143 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:19:07 crc kubenswrapper[4739]: I0218 15:19:07.650304 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:19:07 crc kubenswrapper[4739]: I0218 15:19:07.727698 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 15:19:08 crc kubenswrapper[4739]: I0218 15:19:08.425985 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2kdgq"] Feb 18 15:19:09 crc kubenswrapper[4739]: I0218 15:19:09.154525 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2kdgq" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerName="registry-server" containerID="cri-o://58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41" gracePeriod=2 Feb 18 15:19:09 crc kubenswrapper[4739]: I0218 15:19:09.773420 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:19:09 crc kubenswrapper[4739]: I0218 15:19:09.904208 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a13d0fc-5518-446d-8ce5-32db175f8570-catalog-content\") pod \"3a13d0fc-5518-446d-8ce5-32db175f8570\" (UID: \"3a13d0fc-5518-446d-8ce5-32db175f8570\") " Feb 18 15:19:09 crc kubenswrapper[4739]: I0218 15:19:09.904376 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a13d0fc-5518-446d-8ce5-32db175f8570-utilities\") pod \"3a13d0fc-5518-446d-8ce5-32db175f8570\" (UID: \"3a13d0fc-5518-446d-8ce5-32db175f8570\") " Feb 18 15:19:09 crc kubenswrapper[4739]: I0218 15:19:09.904613 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkn28\" (UniqueName: \"kubernetes.io/projected/3a13d0fc-5518-446d-8ce5-32db175f8570-kube-api-access-pkn28\") pod \"3a13d0fc-5518-446d-8ce5-32db175f8570\" (UID: \"3a13d0fc-5518-446d-8ce5-32db175f8570\") " Feb 18 15:19:09 crc kubenswrapper[4739]: I0218 15:19:09.905090 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a13d0fc-5518-446d-8ce5-32db175f8570-utilities" (OuterVolumeSpecName: "utilities") pod "3a13d0fc-5518-446d-8ce5-32db175f8570" (UID: "3a13d0fc-5518-446d-8ce5-32db175f8570"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:19:09 crc kubenswrapper[4739]: I0218 15:19:09.905461 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a13d0fc-5518-446d-8ce5-32db175f8570-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.033365 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a13d0fc-5518-446d-8ce5-32db175f8570-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a13d0fc-5518-446d-8ce5-32db175f8570" (UID: "3a13d0fc-5518-446d-8ce5-32db175f8570"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.110825 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a13d0fc-5518-446d-8ce5-32db175f8570-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.170318 4739 generic.go:334] "Generic (PLEG): container finished" podID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerID="58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41" exitCode=0 Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.170389 4739 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-2kdgq" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.170425 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kdgq" event={"ID":"3a13d0fc-5518-446d-8ce5-32db175f8570","Type":"ContainerDied","Data":"58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41"} Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.170742 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kdgq" event={"ID":"3a13d0fc-5518-446d-8ce5-32db175f8570","Type":"ContainerDied","Data":"1d350450ce9c4bc4c65ff0ae502f9f800a652d5d5a2f99a2c8e967161fb37f2b"} Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.170778 4739 scope.go:117] "RemoveContainer" containerID="58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.205412 4739 scope.go:117] "RemoveContainer" containerID="eba584e2877040de12272810a04952bc93f1cca86d631336ed5c8209780856d1" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.623977 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a13d0fc-5518-446d-8ce5-32db175f8570-kube-api-access-pkn28" (OuterVolumeSpecName: "kube-api-access-pkn28") pod "3a13d0fc-5518-446d-8ce5-32db175f8570" (UID: "3a13d0fc-5518-446d-8ce5-32db175f8570"). InnerVolumeSpecName "kube-api-access-pkn28". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.633774 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkn28\" (UniqueName: \"kubernetes.io/projected/3a13d0fc-5518-446d-8ce5-32db175f8570-kube-api-access-pkn28\") on node \"crc\" DevicePath \"\"" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.686580 4739 scope.go:117] "RemoveContainer" containerID="5b1838b5e43972eec6e100240448d1039d3f943befea24d158c3472b9de83090" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.774838 4739 scope.go:117] "RemoveContainer" containerID="58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41" Feb 18 15:19:10 crc kubenswrapper[4739]: E0218 15:19:10.775658 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41\": container with ID starting with 58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41 not found: ID does not exist" containerID="58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.775724 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41"} err="failed to get container status \"58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41\": rpc error: code = NotFound desc = could not find container \"58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41\": container with ID starting with 58efc2c12364d322f45a15be19c1a60be2c5a88154c26083f7156efe4bfb4b41 not found: ID does not exist" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.775762 4739 scope.go:117] "RemoveContainer" containerID="eba584e2877040de12272810a04952bc93f1cca86d631336ed5c8209780856d1" Feb 18 15:19:10 crc kubenswrapper[4739]: E0218 15:19:10.776136 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"eba584e2877040de12272810a04952bc93f1cca86d631336ed5c8209780856d1\": container with ID starting with eba584e2877040de12272810a04952bc93f1cca86d631336ed5c8209780856d1 not found: ID does not exist" containerID="eba584e2877040de12272810a04952bc93f1cca86d631336ed5c8209780856d1" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.776182 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eba584e2877040de12272810a04952bc93f1cca86d631336ed5c8209780856d1"} err="failed to get container status \"eba584e2877040de12272810a04952bc93f1cca86d631336ed5c8209780856d1\": rpc error: code = NotFound desc = could not find container \"eba584e2877040de12272810a04952bc93f1cca86d631336ed5c8209780856d1\": container with ID starting with eba584e2877040de12272810a04952bc93f1cca86d631336ed5c8209780856d1 not found: ID does not exist" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.776243 4739 scope.go:117] "RemoveContainer" containerID="5b1838b5e43972eec6e100240448d1039d3f943befea24d158c3472b9de83090" Feb 18 15:19:10 crc kubenswrapper[4739]: E0218 15:19:10.776563 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b1838b5e43972eec6e100240448d1039d3f943befea24d158c3472b9de83090\": container with ID starting with 5b1838b5e43972eec6e100240448d1039d3f943befea24d158c3472b9de83090 not found: ID does not exist" containerID="5b1838b5e43972eec6e100240448d1039d3f943befea24d158c3472b9de83090" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.776592 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b1838b5e43972eec6e100240448d1039d3f943befea24d158c3472b9de83090"} err="failed to get container status \"5b1838b5e43972eec6e100240448d1039d3f943befea24d158c3472b9de83090\": rpc error: code = NotFound desc = could not find container \"5b1838b5e43972eec6e100240448d1039d3f943befea24d158c3472b9de83090\": container with ID starting with 5b1838b5e43972eec6e100240448d1039d3f943befea24d158c3472b9de83090 not found: ID does not exist" Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.863283 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2kdgq"] Feb 18 15:19:10 crc kubenswrapper[4739]: I0218 15:19:10.876313 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2kdgq"] Feb 18 15:19:11 crc kubenswrapper[4739]: I0218 15:19:11.017310 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 15:19:11 crc kubenswrapper[4739]: I0218 15:19:11.017405 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 15:19:11 crc kubenswrapper[4739]: I0218 15:19:11.018472 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"14efbd72afaf309190c1330115bb501e01a5e04256ff4703359f3eda7a513f37"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed startup probe, will be restarted" Feb 18 15:19:11 crc kubenswrapper[4739]: I0218 15:19:11.018530 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" 
podUID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerName="cinder-scheduler" containerID="cri-o://14efbd72afaf309190c1330115bb501e01a5e04256ff4703359f3eda7a513f37" gracePeriod=30 Feb 18 15:19:12 crc kubenswrapper[4739]: I0218 15:19:12.424288 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" path="/var/lib/kubelet/pods/3a13d0fc-5518-446d-8ce5-32db175f8570/volumes" Feb 18 15:19:13 crc kubenswrapper[4739]: I0218 15:19:13.626639 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 18 15:19:13 crc kubenswrapper[4739]: I0218 15:19:13.805155 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 18 15:19:15 crc kubenswrapper[4739]: I0218 15:19:15.034097 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 18 15:19:15 crc kubenswrapper[4739]: I0218 15:19:15.152864 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 18 15:19:41 crc kubenswrapper[4739]: I0218 15:19:41.532935 4739 generic.go:334] "Generic (PLEG): container finished" podID="ff1a7d36-7f60-40b3-82ee-2fd64f780bc4" containerID="14efbd72afaf309190c1330115bb501e01a5e04256ff4703359f3eda7a513f37" exitCode=137 Feb 18 15:19:41 crc kubenswrapper[4739]: I0218 15:19:41.533411 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4","Type":"ContainerDied","Data":"14efbd72afaf309190c1330115bb501e01a5e04256ff4703359f3eda7a513f37"} Feb 18 15:19:41 crc kubenswrapper[4739]: I0218 15:19:41.533467 4739 scope.go:117] "RemoveContainer" containerID="c05a5e51b015b62511e6919cb70699ee5ff50db494a09d669f769b7ecdd61665" Feb 18 15:19:43 crc kubenswrapper[4739]: I0218 15:19:43.642484 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff1a7d36-7f60-40b3-82ee-2fd64f780bc4","Type":"ContainerStarted","Data":"4e7eefd05554da540ad3b190cd2d33f16c7b3628d6ddec497c855a8642997bf8"} Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.490148 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-26llf/must-gather-vps8f"] Feb 18 15:19:44 crc kubenswrapper[4739]: E0218 15:19:44.490930 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerName="extract-utilities" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.490949 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerName="extract-utilities" Feb 18 15:19:44 crc kubenswrapper[4739]: E0218 15:19:44.490965 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerName="registry-server" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.490977 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerName="registry-server" Feb 18 15:19:44 crc kubenswrapper[4739]: E0218 15:19:44.491032 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerName="extract-content" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.491038 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerName="extract-content" Feb 18 15:19:44 
crc kubenswrapper[4739]: I0218 15:19:44.491325 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a13d0fc-5518-446d-8ce5-32db175f8570" containerName="registry-server" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.493582 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-26llf/must-gather-vps8f" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.498095 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-26llf"/"default-dockercfg-lhmph" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.500564 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-26llf"/"openshift-service-ca.crt" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.511099 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-26llf"/"kube-root-ca.crt" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.524064 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-26llf/must-gather-vps8f"] Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.614212 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/205cb55b-f489-4c55-aa9e-13f9ff38def6-must-gather-output\") pod \"must-gather-vps8f\" (UID: \"205cb55b-f489-4c55-aa9e-13f9ff38def6\") " pod="openshift-must-gather-26llf/must-gather-vps8f" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.614273 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkm4z\" (UniqueName: \"kubernetes.io/projected/205cb55b-f489-4c55-aa9e-13f9ff38def6-kube-api-access-hkm4z\") pod \"must-gather-vps8f\" (UID: \"205cb55b-f489-4c55-aa9e-13f9ff38def6\") " pod="openshift-must-gather-26llf/must-gather-vps8f" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.717105 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/205cb55b-f489-4c55-aa9e-13f9ff38def6-must-gather-output\") pod \"must-gather-vps8f\" (UID: \"205cb55b-f489-4c55-aa9e-13f9ff38def6\") " pod="openshift-must-gather-26llf/must-gather-vps8f" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.717175 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkm4z\" (UniqueName: \"kubernetes.io/projected/205cb55b-f489-4c55-aa9e-13f9ff38def6-kube-api-access-hkm4z\") pod \"must-gather-vps8f\" (UID: \"205cb55b-f489-4c55-aa9e-13f9ff38def6\") " pod="openshift-must-gather-26llf/must-gather-vps8f" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.718356 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/205cb55b-f489-4c55-aa9e-13f9ff38def6-must-gather-output\") pod \"must-gather-vps8f\" (UID: \"205cb55b-f489-4c55-aa9e-13f9ff38def6\") " pod="openshift-must-gather-26llf/must-gather-vps8f" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 15:19:44.739433 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkm4z\" (UniqueName: \"kubernetes.io/projected/205cb55b-f489-4c55-aa9e-13f9ff38def6-kube-api-access-hkm4z\") pod \"must-gather-vps8f\" (UID: \"205cb55b-f489-4c55-aa9e-13f9ff38def6\") " pod="openshift-must-gather-26llf/must-gather-vps8f" Feb 18 15:19:44 crc kubenswrapper[4739]: I0218 
15:19:44.814877 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-26llf/must-gather-vps8f" Feb 18 15:19:45 crc kubenswrapper[4739]: I0218 15:19:45.409145 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-26llf/must-gather-vps8f"] Feb 18 15:19:45 crc kubenswrapper[4739]: I0218 15:19:45.666074 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-26llf/must-gather-vps8f" event={"ID":"205cb55b-f489-4c55-aa9e-13f9ff38def6","Type":"ContainerStarted","Data":"7fd91882f1ac653843f4cd5b72d79b58ed24b2c8236be2c5a2b3ea911970f5fb"} Feb 18 15:19:45 crc kubenswrapper[4739]: I0218 15:19:45.996496 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 15:19:51 crc kubenswrapper[4739]: I0218 15:19:51.028963 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 15:19:53 crc kubenswrapper[4739]: I0218 15:19:53.775583 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-26llf/must-gather-vps8f" event={"ID":"205cb55b-f489-4c55-aa9e-13f9ff38def6","Type":"ContainerStarted","Data":"b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a"} Feb 18 15:19:54 crc kubenswrapper[4739]: I0218 15:19:54.789508 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-26llf/must-gather-vps8f" event={"ID":"205cb55b-f489-4c55-aa9e-13f9ff38def6","Type":"ContainerStarted","Data":"18022deb0268d47bf90440c767a7078cea39460ba6ce32fa4f71fe972aa1f276"} Feb 18 15:19:54 crc kubenswrapper[4739]: I0218 15:19:54.811510 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-26llf/must-gather-vps8f" podStartSLOduration=2.697307689 podStartE2EDuration="10.8114873s" podCreationTimestamp="2026-02-18 15:19:44 +0000 UTC" firstStartedPulling="2026-02-18 15:19:45.40759195 +0000 UTC m=+4817.903312872" lastFinishedPulling="2026-02-18 15:19:53.521771561 +0000 UTC m=+4826.017492483" observedRunningTime="2026-02-18 15:19:54.803490368 +0000 UTC m=+4827.299211280" watchObservedRunningTime="2026-02-18 15:19:54.8114873 +0000 UTC m=+4827.307208222" Feb 18 15:19:59 crc kubenswrapper[4739]: I0218 15:19:59.649154 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-26llf/crc-debug-kp8qw"] Feb 18 15:19:59 crc kubenswrapper[4739]: I0218 15:19:59.651502 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-26llf/crc-debug-kp8qw" Feb 18 15:19:59 crc kubenswrapper[4739]: I0218 15:19:59.727635 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf8f4\" (UniqueName: \"kubernetes.io/projected/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6-kube-api-access-mf8f4\") pod \"crc-debug-kp8qw\" (UID: \"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6\") " pod="openshift-must-gather-26llf/crc-debug-kp8qw" Feb 18 15:19:59 crc kubenswrapper[4739]: I0218 15:19:59.727721 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6-host\") pod \"crc-debug-kp8qw\" (UID: \"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6\") " pod="openshift-must-gather-26llf/crc-debug-kp8qw" Feb 18 15:19:59 crc kubenswrapper[4739]: I0218 15:19:59.829684 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf8f4\" (UniqueName: \"kubernetes.io/projected/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6-kube-api-access-mf8f4\") pod \"crc-debug-kp8qw\" (UID: \"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6\") " pod="openshift-must-gather-26llf/crc-debug-kp8qw" Feb 18 15:19:59 crc kubenswrapper[4739]: I0218 15:19:59.829819 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6-host\") pod \"crc-debug-kp8qw\" (UID: \"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6\") " pod="openshift-must-gather-26llf/crc-debug-kp8qw" Feb 18 15:19:59 crc kubenswrapper[4739]: I0218 15:19:59.829922 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6-host\") pod \"crc-debug-kp8qw\" (UID: \"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6\") " pod="openshift-must-gather-26llf/crc-debug-kp8qw" Feb 18 15:19:59 crc kubenswrapper[4739]: I0218 15:19:59.852098 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf8f4\" (UniqueName: \"kubernetes.io/projected/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6-kube-api-access-mf8f4\") pod \"crc-debug-kp8qw\" (UID: \"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6\") " pod="openshift-must-gather-26llf/crc-debug-kp8qw" Feb 18 15:19:59 crc kubenswrapper[4739]: I0218 15:19:59.973910 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-26llf/crc-debug-kp8qw" Feb 18 15:20:00 crc kubenswrapper[4739]: W0218 15:20:00.118948 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod456c8847_a5c4_43ee_8d46_eb5bf5b8c5d6.slice/crio-b4f150046f92991647c6405bea63cf910e4a9129211001660e56be7045bf2368 WatchSource:0}: Error finding container b4f150046f92991647c6405bea63cf910e4a9129211001660e56be7045bf2368: Status 404 returned error can't find the container with id b4f150046f92991647c6405bea63cf910e4a9129211001660e56be7045bf2368 Feb 18 15:20:00 crc kubenswrapper[4739]: I0218 15:20:00.858246 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-26llf/crc-debug-kp8qw" event={"ID":"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6","Type":"ContainerStarted","Data":"b4f150046f92991647c6405bea63cf910e4a9129211001660e56be7045bf2368"} Feb 18 15:20:13 crc kubenswrapper[4739]: I0218 15:20:13.040258 4739 generic.go:334] "Generic (PLEG): container finished" podID="ac03ed3e-3bdc-48cd-bf95-119b31b15208" containerID="3d8147b125cb5878360a74eb88bb0e2f86a338193df75f8534e81151d855bde8" exitCode=0 Feb 18 15:20:13 crc kubenswrapper[4739]: I0218 15:20:13.040344 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" event={"ID":"ac03ed3e-3bdc-48cd-bf95-119b31b15208","Type":"ContainerDied","Data":"3d8147b125cb5878360a74eb88bb0e2f86a338193df75f8534e81151d855bde8"} Feb 18 15:20:15 crc kubenswrapper[4739]: I0218 15:20:15.065083 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-26llf/crc-debug-kp8qw" event={"ID":"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6","Type":"ContainerStarted","Data":"dde35ceb92507110d1347cc0f0f430467fb8403374a8fefc912d25f552c9bfdb"} Feb 18 15:20:15 crc kubenswrapper[4739]: I0218 15:20:15.068502 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" event={"ID":"ac03ed3e-3bdc-48cd-bf95-119b31b15208","Type":"ContainerStarted","Data":"7948fcc51192a2e4056032987b29fa6cf39414a1ecb40405336d586c238b0116"} Feb 18 15:20:15 crc kubenswrapper[4739]: I0218 15:20:15.096175 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-26llf/crc-debug-kp8qw" podStartSLOduration=2.172356903 podStartE2EDuration="16.096150687s" podCreationTimestamp="2026-02-18 15:19:59 +0000 UTC" firstStartedPulling="2026-02-18 15:20:00.122277717 +0000 UTC m=+4832.617998639" lastFinishedPulling="2026-02-18 15:20:14.046071501 +0000 UTC m=+4846.541792423" observedRunningTime="2026-02-18 15:20:15.089793917 +0000 UTC m=+4847.585514859" watchObservedRunningTime="2026-02-18 15:20:15.096150687 +0000 UTC m=+4847.591871619" Feb 18 15:20:29 crc kubenswrapper[4739]: I0218 15:20:29.372625 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:20:29 crc kubenswrapper[4739]: I0218 15:20:29.373234 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 
15:20:31 crc kubenswrapper[4739]: I0218 15:20:31.109729 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 15:20:31 crc kubenswrapper[4739]: I0218 15:20:31.110082 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 15:20:51 crc kubenswrapper[4739]: I0218 15:20:51.125434 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 15:20:51 crc kubenswrapper[4739]: I0218 15:20:51.134387 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-f5c56b6cc-ft74f" Feb 18 15:20:59 crc kubenswrapper[4739]: I0218 15:20:59.372827 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:20:59 crc kubenswrapper[4739]: I0218 15:20:59.373397 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:21:02 crc kubenswrapper[4739]: I0218 15:21:02.658238 4739 generic.go:334] "Generic (PLEG): container finished" podID="456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6" containerID="dde35ceb92507110d1347cc0f0f430467fb8403374a8fefc912d25f552c9bfdb" exitCode=0 Feb 18 15:21:02 crc kubenswrapper[4739]: I0218 15:21:02.658840 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-26llf/crc-debug-kp8qw" event={"ID":"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6","Type":"ContainerDied","Data":"dde35ceb92507110d1347cc0f0f430467fb8403374a8fefc912d25f552c9bfdb"} Feb 18 15:21:03 crc kubenswrapper[4739]: I0218 15:21:03.811606 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-26llf/crc-debug-kp8qw" Feb 18 15:21:03 crc kubenswrapper[4739]: I0218 15:21:03.851269 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-26llf/crc-debug-kp8qw"] Feb 18 15:21:03 crc kubenswrapper[4739]: I0218 15:21:03.864303 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-26llf/crc-debug-kp8qw"] Feb 18 15:21:03 crc kubenswrapper[4739]: I0218 15:21:03.942007 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf8f4\" (UniqueName: \"kubernetes.io/projected/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6-kube-api-access-mf8f4\") pod \"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6\" (UID: \"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6\") " Feb 18 15:21:03 crc kubenswrapper[4739]: I0218 15:21:03.942085 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6-host\") pod \"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6\" (UID: \"456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6\") " Feb 18 15:21:03 crc kubenswrapper[4739]: I0218 15:21:03.942836 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6-host" (OuterVolumeSpecName: "host") pod "456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6" (UID: "456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 15:21:03 crc kubenswrapper[4739]: I0218 15:21:03.950898 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6-kube-api-access-mf8f4" (OuterVolumeSpecName: "kube-api-access-mf8f4") pod "456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6" (UID: "456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6"). InnerVolumeSpecName "kube-api-access-mf8f4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:21:04 crc kubenswrapper[4739]: I0218 15:21:04.045537 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf8f4\" (UniqueName: \"kubernetes.io/projected/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6-kube-api-access-mf8f4\") on node \"crc\" DevicePath \"\"" Feb 18 15:21:04 crc kubenswrapper[4739]: I0218 15:21:04.045790 4739 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6-host\") on node \"crc\" DevicePath \"\"" Feb 18 15:21:04 crc kubenswrapper[4739]: I0218 15:21:04.423872 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6" path="/var/lib/kubelet/pods/456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6/volumes" Feb 18 15:21:04 crc kubenswrapper[4739]: I0218 15:21:04.693012 4739 scope.go:117] "RemoveContainer" containerID="dde35ceb92507110d1347cc0f0f430467fb8403374a8fefc912d25f552c9bfdb" Feb 18 15:21:04 crc kubenswrapper[4739]: I0218 15:21:04.693094 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-26llf/crc-debug-kp8qw" Feb 18 15:21:05 crc kubenswrapper[4739]: I0218 15:21:05.057041 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-26llf/crc-debug-jv5kf"] Feb 18 15:21:05 crc kubenswrapper[4739]: E0218 15:21:05.057653 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6" containerName="container-00" Feb 18 15:21:05 crc kubenswrapper[4739]: I0218 15:21:05.057669 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6" containerName="container-00" Feb 18 15:21:05 crc kubenswrapper[4739]: I0218 15:21:05.057954 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="456c8847-a5c4-43ee-8d46-eb5bf5b8c5d6" containerName="container-00" Feb 18 15:21:05 crc kubenswrapper[4739]: I0218 15:21:05.058995 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-26llf/crc-debug-jv5kf" Feb 18 15:21:05 crc kubenswrapper[4739]: I0218 15:21:05.172957 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xchnw\" (UniqueName: \"kubernetes.io/projected/e2ca88e2-2ad2-41a2-ae52-74cf09b22275-kube-api-access-xchnw\") pod \"crc-debug-jv5kf\" (UID: \"e2ca88e2-2ad2-41a2-ae52-74cf09b22275\") " pod="openshift-must-gather-26llf/crc-debug-jv5kf" Feb 18 15:21:05 crc kubenswrapper[4739]: I0218 15:21:05.173199 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2ca88e2-2ad2-41a2-ae52-74cf09b22275-host\") pod \"crc-debug-jv5kf\" (UID: \"e2ca88e2-2ad2-41a2-ae52-74cf09b22275\") " pod="openshift-must-gather-26llf/crc-debug-jv5kf" Feb 18 15:21:05 crc kubenswrapper[4739]: I0218 15:21:05.275388 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xchnw\" (UniqueName: \"kubernetes.io/projected/e2ca88e2-2ad2-41a2-ae52-74cf09b22275-kube-api-access-xchnw\") pod \"crc-debug-jv5kf\" (UID: \"e2ca88e2-2ad2-41a2-ae52-74cf09b22275\") " pod="openshift-must-gather-26llf/crc-debug-jv5kf" Feb 18 15:21:05 crc kubenswrapper[4739]: I0218 15:21:05.275840 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2ca88e2-2ad2-41a2-ae52-74cf09b22275-host\") pod \"crc-debug-jv5kf\" (UID: \"e2ca88e2-2ad2-41a2-ae52-74cf09b22275\") " pod="openshift-must-gather-26llf/crc-debug-jv5kf" Feb 18 15:21:05 crc kubenswrapper[4739]: I0218 15:21:05.275953 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2ca88e2-2ad2-41a2-ae52-74cf09b22275-host\") pod \"crc-debug-jv5kf\" (UID: \"e2ca88e2-2ad2-41a2-ae52-74cf09b22275\") " pod="openshift-must-gather-26llf/crc-debug-jv5kf" Feb 18 15:21:05 crc kubenswrapper[4739]: I0218 15:21:05.294667 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xchnw\" (UniqueName: \"kubernetes.io/projected/e2ca88e2-2ad2-41a2-ae52-74cf09b22275-kube-api-access-xchnw\") pod \"crc-debug-jv5kf\" (UID: \"e2ca88e2-2ad2-41a2-ae52-74cf09b22275\") " pod="openshift-must-gather-26llf/crc-debug-jv5kf" Feb 18 15:21:05 crc kubenswrapper[4739]: I0218 15:21:05.376644 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-26llf/crc-debug-jv5kf" Feb 18 15:21:05 crc kubenswrapper[4739]: I0218 15:21:05.705663 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-26llf/crc-debug-jv5kf" event={"ID":"e2ca88e2-2ad2-41a2-ae52-74cf09b22275","Type":"ContainerStarted","Data":"fdddc444b46cae2a673c333898ccb719c75ad18fc7f6169f3dc5744334119cf3"} Feb 18 15:21:06 crc kubenswrapper[4739]: I0218 15:21:06.718949 4739 generic.go:334] "Generic (PLEG): container finished" podID="e2ca88e2-2ad2-41a2-ae52-74cf09b22275" containerID="7916fd68986056bd3242a9e47080df5316e2eaa9c4630168c7e653cc8da14d93" exitCode=0 Feb 18 15:21:06 crc kubenswrapper[4739]: I0218 15:21:06.719031 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-26llf/crc-debug-jv5kf" event={"ID":"e2ca88e2-2ad2-41a2-ae52-74cf09b22275","Type":"ContainerDied","Data":"7916fd68986056bd3242a9e47080df5316e2eaa9c4630168c7e653cc8da14d93"} Feb 18 15:21:07 crc kubenswrapper[4739]: I0218 15:21:07.884516 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-26llf/crc-debug-jv5kf" Feb 18 15:21:08 crc kubenswrapper[4739]: I0218 15:21:08.047049 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2ca88e2-2ad2-41a2-ae52-74cf09b22275-host\") pod \"e2ca88e2-2ad2-41a2-ae52-74cf09b22275\" (UID: \"e2ca88e2-2ad2-41a2-ae52-74cf09b22275\") " Feb 18 15:21:08 crc kubenswrapper[4739]: I0218 15:21:08.047222 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xchnw\" (UniqueName: \"kubernetes.io/projected/e2ca88e2-2ad2-41a2-ae52-74cf09b22275-kube-api-access-xchnw\") pod \"e2ca88e2-2ad2-41a2-ae52-74cf09b22275\" (UID: \"e2ca88e2-2ad2-41a2-ae52-74cf09b22275\") " Feb 18 15:21:08 crc kubenswrapper[4739]: I0218 15:21:08.047760 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2ca88e2-2ad2-41a2-ae52-74cf09b22275-host" (OuterVolumeSpecName: "host") pod "e2ca88e2-2ad2-41a2-ae52-74cf09b22275" (UID: "e2ca88e2-2ad2-41a2-ae52-74cf09b22275"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 15:21:08 crc kubenswrapper[4739]: I0218 15:21:08.048170 4739 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2ca88e2-2ad2-41a2-ae52-74cf09b22275-host\") on node \"crc\" DevicePath \"\"" Feb 18 15:21:08 crc kubenswrapper[4739]: I0218 15:21:08.053095 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2ca88e2-2ad2-41a2-ae52-74cf09b22275-kube-api-access-xchnw" (OuterVolumeSpecName: "kube-api-access-xchnw") pod "e2ca88e2-2ad2-41a2-ae52-74cf09b22275" (UID: "e2ca88e2-2ad2-41a2-ae52-74cf09b22275"). InnerVolumeSpecName "kube-api-access-xchnw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:21:08 crc kubenswrapper[4739]: I0218 15:21:08.152589 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xchnw\" (UniqueName: \"kubernetes.io/projected/e2ca88e2-2ad2-41a2-ae52-74cf09b22275-kube-api-access-xchnw\") on node \"crc\" DevicePath \"\"" Feb 18 15:21:08 crc kubenswrapper[4739]: I0218 15:21:08.628310 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-26llf/crc-debug-jv5kf"] Feb 18 15:21:08 crc kubenswrapper[4739]: I0218 15:21:08.640458 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-26llf/crc-debug-jv5kf"] Feb 18 15:21:08 crc kubenswrapper[4739]: I0218 15:21:08.742372 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdddc444b46cae2a673c333898ccb719c75ad18fc7f6169f3dc5744334119cf3" Feb 18 15:21:08 crc kubenswrapper[4739]: I0218 15:21:08.742439 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-26llf/crc-debug-jv5kf" Feb 18 15:21:09 crc kubenswrapper[4739]: I0218 15:21:09.790085 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-26llf/crc-debug-csfkl"] Feb 18 15:21:09 crc kubenswrapper[4739]: E0218 15:21:09.790766 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2ca88e2-2ad2-41a2-ae52-74cf09b22275" containerName="container-00" Feb 18 15:21:09 crc kubenswrapper[4739]: I0218 15:21:09.790784 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2ca88e2-2ad2-41a2-ae52-74cf09b22275" containerName="container-00" Feb 18 15:21:09 crc kubenswrapper[4739]: I0218 15:21:09.791092 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2ca88e2-2ad2-41a2-ae52-74cf09b22275" containerName="container-00" Feb 18 15:21:09 crc kubenswrapper[4739]: I0218 15:21:09.792151 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-26llf/crc-debug-csfkl" Feb 18 15:21:09 crc kubenswrapper[4739]: I0218 15:21:09.898657 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dztqg\" (UniqueName: \"kubernetes.io/projected/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d-kube-api-access-dztqg\") pod \"crc-debug-csfkl\" (UID: \"af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d\") " pod="openshift-must-gather-26llf/crc-debug-csfkl" Feb 18 15:21:09 crc kubenswrapper[4739]: I0218 15:21:09.898749 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d-host\") pod \"crc-debug-csfkl\" (UID: \"af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d\") " pod="openshift-must-gather-26llf/crc-debug-csfkl" Feb 18 15:21:10 crc kubenswrapper[4739]: I0218 15:21:10.001009 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dztqg\" (UniqueName: \"kubernetes.io/projected/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d-kube-api-access-dztqg\") pod \"crc-debug-csfkl\" (UID: \"af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d\") " pod="openshift-must-gather-26llf/crc-debug-csfkl" Feb 18 15:21:10 crc kubenswrapper[4739]: I0218 15:21:10.001145 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d-host\") pod \"crc-debug-csfkl\" (UID: \"af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d\") " pod="openshift-must-gather-26llf/crc-debug-csfkl" Feb 18 15:21:10 crc kubenswrapper[4739]: I0218 15:21:10.001370 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d-host\") pod \"crc-debug-csfkl\" (UID: \"af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d\") " pod="openshift-must-gather-26llf/crc-debug-csfkl" Feb 18 15:21:10 crc kubenswrapper[4739]: I0218 15:21:10.298549 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dztqg\" (UniqueName: \"kubernetes.io/projected/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d-kube-api-access-dztqg\") pod \"crc-debug-csfkl\" (UID: \"af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d\") " pod="openshift-must-gather-26llf/crc-debug-csfkl" Feb 18 15:21:10 crc kubenswrapper[4739]: I0218 15:21:10.418515 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-26llf/crc-debug-csfkl" Feb 18 15:21:10 crc kubenswrapper[4739]: I0218 15:21:10.428037 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2ca88e2-2ad2-41a2-ae52-74cf09b22275" path="/var/lib/kubelet/pods/e2ca88e2-2ad2-41a2-ae52-74cf09b22275/volumes" Feb 18 15:21:10 crc kubenswrapper[4739]: I0218 15:21:10.765875 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-26llf/crc-debug-csfkl" event={"ID":"af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d","Type":"ContainerStarted","Data":"067acd8d691f5999900b21458078b086cf69dbe3dd8ced0d38139f6e5b8c731c"} Feb 18 15:21:11 crc kubenswrapper[4739]: I0218 15:21:11.778648 4739 generic.go:334] "Generic (PLEG): container finished" podID="af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d" containerID="ab7dbc53680c705d78f186a04e28323aa311ec8027168cbbbaabc3a388c4677e" exitCode=0 Feb 18 15:21:11 crc kubenswrapper[4739]: I0218 15:21:11.778712 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-26llf/crc-debug-csfkl" event={"ID":"af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d","Type":"ContainerDied","Data":"ab7dbc53680c705d78f186a04e28323aa311ec8027168cbbbaabc3a388c4677e"} Feb 18 15:21:11 crc kubenswrapper[4739]: I0218 15:21:11.821391 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-26llf/crc-debug-csfkl"] Feb 18 15:21:11 crc kubenswrapper[4739]: I0218 15:21:11.832846 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-26llf/crc-debug-csfkl"] Feb 18 15:21:12 crc kubenswrapper[4739]: I0218 15:21:12.919475 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-26llf/crc-debug-csfkl" Feb 18 15:21:13 crc kubenswrapper[4739]: I0218 15:21:13.093780 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztqg\" (UniqueName: \"kubernetes.io/projected/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d-kube-api-access-dztqg\") pod \"af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d\" (UID: \"af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d\") " Feb 18 15:21:13 crc kubenswrapper[4739]: I0218 15:21:13.093937 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d-host\") pod \"af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d\" (UID: \"af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d\") " Feb 18 15:21:13 crc kubenswrapper[4739]: I0218 15:21:13.094040 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d-host" (OuterVolumeSpecName: "host") pod "af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d" (UID: "af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 15:21:13 crc kubenswrapper[4739]: I0218 15:21:13.094807 4739 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d-host\") on node \"crc\" DevicePath \"\"" Feb 18 15:21:13 crc kubenswrapper[4739]: I0218 15:21:13.117000 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d-kube-api-access-dztqg" (OuterVolumeSpecName: "kube-api-access-dztqg") pod "af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d" (UID: "af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d"). InnerVolumeSpecName "kube-api-access-dztqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:21:13 crc kubenswrapper[4739]: I0218 15:21:13.197893 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dztqg\" (UniqueName: \"kubernetes.io/projected/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d-kube-api-access-dztqg\") on node \"crc\" DevicePath \"\"" Feb 18 15:21:13 crc kubenswrapper[4739]: I0218 15:21:13.804045 4739 scope.go:117] "RemoveContainer" containerID="ab7dbc53680c705d78f186a04e28323aa311ec8027168cbbbaabc3a388c4677e" Feb 18 15:21:13 crc kubenswrapper[4739]: I0218 15:21:13.804096 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-26llf/crc-debug-csfkl" Feb 18 15:21:14 crc kubenswrapper[4739]: I0218 15:21:14.423597 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d" path="/var/lib/kubelet/pods/af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d/volumes" Feb 18 15:21:29 crc kubenswrapper[4739]: I0218 15:21:29.372568 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:21:29 crc kubenswrapper[4739]: I0218 15:21:29.373216 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:21:29 crc kubenswrapper[4739]: I0218 15:21:29.373270 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 15:21:29 crc kubenswrapper[4739]: I0218 15:21:29.374302 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 15:21:29 crc kubenswrapper[4739]: I0218 15:21:29.374368 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" gracePeriod=600 Feb 18 15:21:29 crc kubenswrapper[4739]: E0218 15:21:29.495096 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:21:30 crc kubenswrapper[4739]: I0218 15:21:30.013356 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" exitCode=0 Feb 18 15:21:30 crc kubenswrapper[4739]: I0218 15:21:30.013417 
4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400"} Feb 18 15:21:30 crc kubenswrapper[4739]: I0218 15:21:30.013483 4739 scope.go:117] "RemoveContainer" containerID="3ff0a839c3cd91b61bc5a9bec2e5ff1579fcf9258342af265e7f1b255f36409c" Feb 18 15:21:30 crc kubenswrapper[4739]: I0218 15:21:30.014482 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:21:30 crc kubenswrapper[4739]: E0218 15:21:30.014885 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:21:41 crc kubenswrapper[4739]: I0218 15:21:41.411162 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:21:41 crc kubenswrapper[4739]: E0218 15:21:41.422181 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:21:45 crc kubenswrapper[4739]: I0218 15:21:45.100978 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_44288fd5-6ac4-4d9f-b16e-97ae45b79030/aodh-api/0.log" Feb 18 15:21:45 crc kubenswrapper[4739]: I0218 15:21:45.877151 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_44288fd5-6ac4-4d9f-b16e-97ae45b79030/aodh-listener/0.log" Feb 18 15:21:45 crc kubenswrapper[4739]: I0218 15:21:45.899046 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_44288fd5-6ac4-4d9f-b16e-97ae45b79030/aodh-notifier/0.log" Feb 18 15:21:45 crc kubenswrapper[4739]: I0218 15:21:45.957707 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_44288fd5-6ac4-4d9f-b16e-97ae45b79030/aodh-evaluator/0.log" Feb 18 15:21:46 crc kubenswrapper[4739]: I0218 15:21:46.103444 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5fccfc9568-dvccq_aca969df-0549-4d07-ada4-2e0515419a1d/barbican-api/0.log" Feb 18 15:21:46 crc kubenswrapper[4739]: I0218 15:21:46.129357 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5fccfc9568-dvccq_aca969df-0549-4d07-ada4-2e0515419a1d/barbican-api-log/0.log" Feb 18 15:21:46 crc kubenswrapper[4739]: I0218 15:21:46.216845 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-575dbd86bd-gjcs6_8f41089a-bbe1-4371-9a89-38423dca256c/barbican-keystone-listener/0.log" Feb 18 15:21:46 crc kubenswrapper[4739]: I0218 15:21:46.381116 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-575dbd86bd-gjcs6_8f41089a-bbe1-4371-9a89-38423dca256c/barbican-keystone-listener-log/0.log" Feb 18 15:21:46 crc kubenswrapper[4739]: I0218 15:21:46.406443 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-765d88ff9c-smd7n_53848a1c-a5c5-4948-a45f-2ba01bc166ca/barbican-worker/0.log" Feb 18 15:21:46 crc kubenswrapper[4739]: I0218 15:21:46.459985 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-765d88ff9c-smd7n_53848a1c-a5c5-4948-a45f-2ba01bc166ca/barbican-worker-log/0.log" Feb 18 15:21:46 crc kubenswrapper[4739]: I0218 15:21:46.609197 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-tln9f_64a6af44-5f38-4ac7-a370-74b190762136/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:46 crc kubenswrapper[4739]: I0218 15:21:46.690761 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b/ceilometer-central-agent/1.log" Feb 18 15:21:46 crc kubenswrapper[4739]: I0218 15:21:46.867560 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b/ceilometer-central-agent/0.log" Feb 18 15:21:46 crc kubenswrapper[4739]: I0218 15:21:46.900543 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b/proxy-httpd/0.log" Feb 18 15:21:46 crc kubenswrapper[4739]: I0218 15:21:46.904094 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b/ceilometer-notification-agent/0.log" Feb 18 15:21:46 crc kubenswrapper[4739]: I0218 15:21:46.968864 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2ce54677-cbd5-4ec2-a5ed-8ab12ecbeb7b/sg-core/0.log" Feb 18 15:21:47 crc kubenswrapper[4739]: I0218 15:21:47.128213 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_54fd1c90-48dd-4ae7-b2db-d80aa5f14a24/cinder-api-log/0.log" Feb 18 15:21:47 crc kubenswrapper[4739]: I0218 15:21:47.150370 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_54fd1c90-48dd-4ae7-b2db-d80aa5f14a24/cinder-api/0.log" Feb 18 15:21:47 crc kubenswrapper[4739]: I0218 15:21:47.326796 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ff1a7d36-7f60-40b3-82ee-2fd64f780bc4/cinder-scheduler/2.log" Feb 18 15:21:47 crc kubenswrapper[4739]: I0218 15:21:47.400584 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ff1a7d36-7f60-40b3-82ee-2fd64f780bc4/probe/0.log" Feb 18 15:21:47 crc kubenswrapper[4739]: I0218 15:21:47.403168 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ff1a7d36-7f60-40b3-82ee-2fd64f780bc4/cinder-scheduler/1.log" Feb 18 15:21:47 crc kubenswrapper[4739]: I0218 15:21:47.570622 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-74l2j_c3fe82f6-0603-44f2-95fa-57ce24505d2c/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:47 crc kubenswrapper[4739]: I0218 15:21:47.666072 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-9jq24_8795d84c-3a90-438c-8f2b-066cd875316d/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:47 crc kubenswrapper[4739]: I0218 15:21:47.790065 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-hd9ps_703ba4cc-fc0d-4adf-bb13-62fecb68cff7/init/0.log" Feb 18 15:21:47 crc kubenswrapper[4739]: I0218 15:21:47.962342 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-hd9ps_703ba4cc-fc0d-4adf-bb13-62fecb68cff7/init/0.log" Feb 18 15:21:48 crc kubenswrapper[4739]: I0218 15:21:48.009417 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-7wbdv_ed059e6b-2560-487a-98a8-c1443d31cca9/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:48 crc kubenswrapper[4739]: I0218 15:21:48.024045 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-hd9ps_703ba4cc-fc0d-4adf-bb13-62fecb68cff7/dnsmasq-dns/0.log" Feb 18 15:21:48 crc kubenswrapper[4739]: I0218 15:21:48.226704 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ac763f9f-5faa-4559-8d07-960b3d30566b/glance-httpd/0.log" Feb 18 15:21:48 crc kubenswrapper[4739]: I0218 15:21:48.448186 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ac763f9f-5faa-4559-8d07-960b3d30566b/glance-log/0.log" Feb 18 15:21:48 crc kubenswrapper[4739]: I0218 15:21:48.451254 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_43f517de-033c-467c-9937-df5706ee1ca2/glance-log/0.log" Feb 18 15:21:48 crc kubenswrapper[4739]: I0218 15:21:48.477088 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_43f517de-033c-467c-9937-df5706ee1ca2/glance-httpd/0.log" Feb 18 15:21:49 crc kubenswrapper[4739]: I0218 15:21:49.276061 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-5cfc6d5787-cxgnr_9c65abc8-9ca5-4a28-89d7-f5ffe23d1040/heat-api/0.log" Feb 18 15:21:49 crc kubenswrapper[4739]: I0218 15:21:49.283837 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-5957545cb-6lrc2_26539513-f274-471e-ad4a-10bcd4758458/heat-engine/0.log" Feb 18 15:21:49 crc kubenswrapper[4739]: I0218 15:21:49.365701 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-8dd984b75-2cjs7_ecd1f6fa-009d-4942-98ad-203c31a7bf5b/heat-cfnapi/0.log" Feb 18 15:21:49 crc kubenswrapper[4739]: I0218 15:21:49.419747 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-klrh7_fc5c5a16-015a-48fe-a2c1-1954543e14bd/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:49 crc kubenswrapper[4739]: I0218 15:21:49.570765 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-vglv4_af925314-bcd8-4373-b57e-612251a9687a/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:49 crc kubenswrapper[4739]: I0218 15:21:49.779692 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29523781-z64zk_28825764-dace-4769-b71e-4d55b8aa1d97/keystone-cron/0.log" Feb 18 15:21:49 crc kubenswrapper[4739]: I0218 15:21:49.879764 4739 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_3e688eb1-895d-465e-b5d9-a7b7ba9f4650/kube-state-metrics/0.log" Feb 18 15:21:50 crc kubenswrapper[4739]: I0218 15:21:50.210742 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-znm2n_bd7dea6a-d047-4a6c-809f-395a7cf418e8/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:50 crc kubenswrapper[4739]: I0218 15:21:50.220022 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-nsjkf_61bf8a46-92c1-4b2e-9b8c-8206c618b98a/logging-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:50 crc kubenswrapper[4739]: I0218 15:21:50.304297 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7dff988c46-72t9g_74cf9632-a7c0-4b6e-98ce-ebd6411a6594/keystone-api/0.log" Feb 18 15:21:50 crc kubenswrapper[4739]: I0218 15:21:50.486711 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_8143c3df-5224-4095-a65f-f9f005913b61/mysqld-exporter/0.log" Feb 18 15:21:50 crc kubenswrapper[4739]: I0218 15:21:50.816891 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77cbbcb957-6xzzv_6225bd93-c14b-4682-8e07-e6ca3cce37c9/neutron-httpd/0.log" Feb 18 15:21:50 crc kubenswrapper[4739]: I0218 15:21:50.887984 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-klw7j_015603d5-7d09-4388-a5d1-93c25d1b6344/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:50 crc kubenswrapper[4739]: I0218 15:21:50.902610 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77cbbcb957-6xzzv_6225bd93-c14b-4682-8e07-e6ca3cce37c9/neutron-api/0.log" Feb 18 15:21:51 crc kubenswrapper[4739]: I0218 15:21:51.639743 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_c35bd35d-d228-4223-a207-ea164d0c6b23/nova-cell0-conductor-conductor/0.log" Feb 18 15:21:51 crc kubenswrapper[4739]: I0218 15:21:51.671678 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_3797374a-f0e4-4ba5-8974-c0049bad543a/nova-api-log/0.log" Feb 18 15:21:51 crc kubenswrapper[4739]: I0218 15:21:51.891303 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_ffa018e5-ca81-4d0e-86f7-a9c6fb25fdd0/nova-cell1-conductor-conductor/0.log" Feb 18 15:21:52 crc kubenswrapper[4739]: I0218 15:21:52.013670 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_ea00e513-02cf-4951-b9ec-50966f982142/nova-cell1-novncproxy-novncproxy/0.log" Feb 18 15:21:52 crc kubenswrapper[4739]: I0218 15:21:52.049521 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_3797374a-f0e4-4ba5-8974-c0049bad543a/nova-api-api/0.log" Feb 18 15:21:52 crc kubenswrapper[4739]: I0218 15:21:52.185832 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-mwcgw_08b26802-db14-4190-99d1-9c9c7403195b/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:52 crc kubenswrapper[4739]: I0218 15:21:52.377501 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2ab30c1a-7b94-430a-ac85-ebe051fadbfe/nova-metadata-log/0.log" Feb 18 15:21:52 crc kubenswrapper[4739]: I0218 15:21:52.750269 4739 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_openstack-cell1-galera-0_869aa11b-eba7-4598-90dc-d50c642b9120/mysql-bootstrap/0.log" Feb 18 15:21:52 crc kubenswrapper[4739]: I0218 15:21:52.825009 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_ba769c63-86fa-4971-afd8-4e3a57c94c37/nova-scheduler-scheduler/0.log" Feb 18 15:21:52 crc kubenswrapper[4739]: I0218 15:21:52.988400 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_869aa11b-eba7-4598-90dc-d50c642b9120/galera/1.log" Feb 18 15:21:53 crc kubenswrapper[4739]: I0218 15:21:53.013797 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_869aa11b-eba7-4598-90dc-d50c642b9120/mysql-bootstrap/0.log" Feb 18 15:21:53 crc kubenswrapper[4739]: I0218 15:21:53.070313 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_869aa11b-eba7-4598-90dc-d50c642b9120/galera/0.log" Feb 18 15:21:53 crc kubenswrapper[4739]: I0218 15:21:53.274505 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_acc9bbc5-8705-410b-977b-ca9245834e36/mysql-bootstrap/0.log" Feb 18 15:21:53 crc kubenswrapper[4739]: I0218 15:21:53.537271 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_acc9bbc5-8705-410b-977b-ca9245834e36/galera/0.log" Feb 18 15:21:53 crc kubenswrapper[4739]: I0218 15:21:53.573802 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_acc9bbc5-8705-410b-977b-ca9245834e36/mysql-bootstrap/0.log" Feb 18 15:21:53 crc kubenswrapper[4739]: I0218 15:21:53.610229 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_acc9bbc5-8705-410b-977b-ca9245834e36/galera/1.log" Feb 18 15:21:54 crc kubenswrapper[4739]: I0218 15:21:54.147663 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_2ab30c1a-7b94-430a-ac85-ebe051fadbfe/nova-metadata-metadata/0.log" Feb 18 15:21:54 crc kubenswrapper[4739]: I0218 15:21:54.412868 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:21:54 crc kubenswrapper[4739]: E0218 15:21:54.414511 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:21:54 crc kubenswrapper[4739]: I0218 15:21:54.531081 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_6699e575-f077-433c-a257-f65f329d6e69/openstackclient/0.log" Feb 18 15:21:54 crc kubenswrapper[4739]: I0218 15:21:54.548384 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-q6g47_8daa97ee-3449-4043-8218-71aaa601c37c/openstack-network-exporter/0.log" Feb 18 15:21:54 crc kubenswrapper[4739]: I0218 15:21:54.703702 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_39286c8b-55e8-41a2-9f36-a7ce475e8313/memcached/0.log" Feb 18 15:21:54 crc kubenswrapper[4739]: I0218 15:21:54.720556 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-5cglq_3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7/ovsdb-server-init/0.log" Feb 18 15:21:54 crc kubenswrapper[4739]: I0218 15:21:54.870775 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5cglq_3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7/ovsdb-server-init/0.log" Feb 18 15:21:54 crc kubenswrapper[4739]: I0218 15:21:54.899598 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5cglq_3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7/ovsdb-server/0.log" Feb 18 15:21:54 crc kubenswrapper[4739]: I0218 15:21:54.911610 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-zz64p_7289493d-f197-436b-bc45-84721d12c034/ovn-controller/0.log" Feb 18 15:21:54 crc kubenswrapper[4739]: I0218 15:21:54.920182 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5cglq_3d6d7ab5-2170-48ba-b9bf-40da1ab8fdf7/ovs-vswitchd/0.log" Feb 18 15:21:55 crc kubenswrapper[4739]: I0218 15:21:55.155166 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-g8rqb_c4382bff-5480-4a55-ad49-e6293729f738/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:55 crc kubenswrapper[4739]: I0218 15:21:55.229028 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b3be45be-9ee4-4114-b2e5-78d9b0341129/ovn-northd/0.log" Feb 18 15:21:55 crc kubenswrapper[4739]: I0218 15:21:55.235399 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b3be45be-9ee4-4114-b2e5-78d9b0341129/openstack-network-exporter/0.log" Feb 18 15:21:55 crc kubenswrapper[4739]: I0218 15:21:55.383733 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_22289461-6c53-461c-adfe-0f1cd7209928/openstack-network-exporter/0.log" Feb 18 15:21:55 crc kubenswrapper[4739]: I0218 15:21:55.393030 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_22289461-6c53-461c-adfe-0f1cd7209928/ovsdbserver-nb/0.log" Feb 18 15:21:55 crc kubenswrapper[4739]: I0218 15:21:55.639641 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_74c434ad-eea8-4896-b65d-26eb1ca89f84/openstack-network-exporter/0.log" Feb 18 15:21:55 crc kubenswrapper[4739]: I0218 15:21:55.786329 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_74c434ad-eea8-4896-b65d-26eb1ca89f84/ovsdbserver-sb/0.log" Feb 18 15:21:55 crc kubenswrapper[4739]: I0218 15:21:55.931226 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-65fbfb5b48-rchlc_38710bdf-e679-45f4-b3a6-597a3b1cb186/placement-api/0.log" Feb 18 15:21:55 crc kubenswrapper[4739]: I0218 15:21:55.956388 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-65fbfb5b48-rchlc_38710bdf-e679-45f4-b3a6-597a3b1cb186/placement-log/0.log" Feb 18 15:21:56 crc kubenswrapper[4739]: I0218 15:21:56.672157 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_06c16940-f153-4d15-891d-b0b91e9bce5a/init-config-reloader/0.log" Feb 18 15:21:56 crc kubenswrapper[4739]: I0218 15:21:56.838967 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_06c16940-f153-4d15-891d-b0b91e9bce5a/init-config-reloader/0.log" Feb 18 15:21:56 crc kubenswrapper[4739]: I0218 15:21:56.868359 4739 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_06c16940-f153-4d15-891d-b0b91e9bce5a/thanos-sidecar/0.log" Feb 18 15:21:56 crc kubenswrapper[4739]: I0218 15:21:56.868859 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_06c16940-f153-4d15-891d-b0b91e9bce5a/config-reloader/0.log" Feb 18 15:21:56 crc kubenswrapper[4739]: I0218 15:21:56.872538 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_06c16940-f153-4d15-891d-b0b91e9bce5a/prometheus/0.log" Feb 18 15:21:57 crc kubenswrapper[4739]: I0218 15:21:57.012983 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c71b6fb5-d59d-479d-b3fc-996d14bd93ed/setup-container/0.log" Feb 18 15:21:57 crc kubenswrapper[4739]: I0218 15:21:57.211061 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bd925294-7441-4ba8-af23-290ef19deb9b/setup-container/0.log" Feb 18 15:21:57 crc kubenswrapper[4739]: I0218 15:21:57.229599 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c71b6fb5-d59d-479d-b3fc-996d14bd93ed/setup-container/0.log" Feb 18 15:21:57 crc kubenswrapper[4739]: I0218 15:21:57.230724 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c71b6fb5-d59d-479d-b3fc-996d14bd93ed/rabbitmq/0.log" Feb 18 15:21:57 crc kubenswrapper[4739]: I0218 15:21:57.446908 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bd925294-7441-4ba8-af23-290ef19deb9b/setup-container/0.log" Feb 18 15:21:57 crc kubenswrapper[4739]: I0218 15:21:57.496792 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bd925294-7441-4ba8-af23-290ef19deb9b/rabbitmq/0.log" Feb 18 15:21:57 crc kubenswrapper[4739]: I0218 15:21:57.505726 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_de0100ca-60e4-40d3-afeb-f5da9513fdc1/setup-container/0.log" Feb 18 15:21:57 crc kubenswrapper[4739]: I0218 15:21:57.666745 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_de0100ca-60e4-40d3-afeb-f5da9513fdc1/setup-container/0.log" Feb 18 15:21:57 crc kubenswrapper[4739]: I0218 15:21:57.689680 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_de0100ca-60e4-40d3-afeb-f5da9513fdc1/rabbitmq/0.log" Feb 18 15:21:57 crc kubenswrapper[4739]: I0218 15:21:57.782250 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_83da58fc-6d28-4a56-abc1-00267082c6b6/setup-container/0.log" Feb 18 15:21:58 crc kubenswrapper[4739]: I0218 15:21:58.047606 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_83da58fc-6d28-4a56-abc1-00267082c6b6/setup-container/0.log" Feb 18 15:21:58 crc kubenswrapper[4739]: I0218 15:21:58.064592 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_83da58fc-6d28-4a56-abc1-00267082c6b6/rabbitmq/0.log" Feb 18 15:21:58 crc kubenswrapper[4739]: I0218 15:21:58.143515 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-m7xvr_c7a96416-0a9e-44f5-9200-755a99d4c38e/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:58 crc kubenswrapper[4739]: I0218 15:21:58.263187 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-8lfnc_ba2cd97a-cec6-45bc-a08c-b179dc0f72d6/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:58 crc kubenswrapper[4739]: I0218 15:21:58.339176 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-5lxb5_888c24c8-ed9b-4434-b55c-d9f89ba3f0eb/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:58 crc kubenswrapper[4739]: I0218 15:21:58.457407 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-jct96_18f01021-e95a-43e8-a660-1a2c9cb9d8c5/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:58 crc kubenswrapper[4739]: I0218 15:21:58.573584 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-f68sz_63f139bc-490d-48b7-98c1-e29c8f583d90/ssh-known-hosts-edpm-deployment/0.log" Feb 18 15:21:58 crc kubenswrapper[4739]: I0218 15:21:58.753511 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-668fffc447-mjpk7_ac478be7-1c16-4a7f-a2d2-618cfe76c3d3/proxy-server/0.log" Feb 18 15:21:58 crc kubenswrapper[4739]: I0218 15:21:58.765361 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-668fffc447-mjpk7_ac478be7-1c16-4a7f-a2d2-618cfe76c3d3/proxy-httpd/0.log" Feb 18 15:21:58 crc kubenswrapper[4739]: I0218 15:21:58.854331 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-cfjpx_ab89b7a2-642d-4a99-9eb4-f01b2990e75d/swift-ring-rebalance/0.log" Feb 18 15:21:58 crc kubenswrapper[4739]: I0218 15:21:58.993808 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/account-auditor/0.log" Feb 18 15:21:58 crc kubenswrapper[4739]: I0218 15:21:58.996321 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/account-reaper/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.081434 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/account-replicator/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.088872 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/container-auditor/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.095325 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/account-server/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.222534 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/container-server/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.237889 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/container-replicator/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.287255 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/object-auditor/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.304742 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/container-updater/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.327012 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/object-expirer/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.424836 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/object-replicator/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.430401 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/object-server/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.472252 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/object-updater/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.513919 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/rsync/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.534611 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4da69d20-d4af-4d8d-b1e1-5026676d2078/swift-recon-cron/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.662060 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-2jr8x_aa0510e7-f2a3-4466-b797-dab2e7ec0218/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.747305 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-z2sm8_76808ec1-db9d-494f-9d72-88b2bc28befb/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:21:59 crc kubenswrapper[4739]: I0218 15:21:59.967513 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_fafc1147-dd3a-429c-ae6f-48865401c68b/test-operator-logs-container/0.log" Feb 18 15:22:00 crc kubenswrapper[4739]: I0218 15:22:00.076172 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-lzpjh_884f40e4-492b-4f73-94a7-8be81bde150e/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 15:22:00 crc kubenswrapper[4739]: I0218 15:22:00.129845 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_2d70fa76-2eec-4ca5-abd7-44a082625a40/tempest-tests-tempest-tests-runner/0.log" Feb 18 15:22:08 crc kubenswrapper[4739]: I0218 15:22:08.420222 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:22:08 crc kubenswrapper[4739]: E0218 15:22:08.421248 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:22:22 crc kubenswrapper[4739]: I0218 15:22:22.413777 4739 scope.go:117] "RemoveContainer" 
containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:22:22 crc kubenswrapper[4739]: E0218 15:22:22.416805 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:22:30 crc kubenswrapper[4739]: I0218 15:22:30.326681 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq_d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90/util/0.log" Feb 18 15:22:30 crc kubenswrapper[4739]: I0218 15:22:30.514491 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq_d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90/pull/0.log" Feb 18 15:22:30 crc kubenswrapper[4739]: I0218 15:22:30.539187 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq_d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90/util/0.log" Feb 18 15:22:30 crc kubenswrapper[4739]: I0218 15:22:30.578136 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq_d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90/pull/0.log" Feb 18 15:22:30 crc kubenswrapper[4739]: I0218 15:22:30.767609 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq_d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90/extract/0.log" Feb 18 15:22:30 crc kubenswrapper[4739]: I0218 15:22:30.793237 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq_d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90/util/0.log" Feb 18 15:22:30 crc kubenswrapper[4739]: I0218 15:22:30.812793 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5fa593b11d514d0a7be460a834e869028c01bb0b0d1b03b6172f8c46aewqgtq_d9dd3a53-7ae3-4da0-ad4a-fcd8f6fb1c90/pull/0.log" Feb 18 15:22:31 crc kubenswrapper[4739]: I0218 15:22:31.372097 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-47445_c8f419fe-23b1-4a93-97fe-05071df32425/manager/0.log" Feb 18 15:22:31 crc kubenswrapper[4739]: I0218 15:22:31.740557 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-hxdbh_19470a60-c796-4a28-a0e2-65b50fa94ea6/manager/0.log" Feb 18 15:22:31 crc kubenswrapper[4739]: I0218 15:22:31.996423 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-m469j_60bad312-a989-43d1-87e6-6c6f10d1ae8f/manager/0.log" Feb 18 15:22:32 crc kubenswrapper[4739]: I0218 15:22:32.235436 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-xhkdh_877f7fe3-168f-4b05-a88e-a7a11bf45e36/manager/0.log" Feb 18 15:22:32 crc kubenswrapper[4739]: I0218 15:22:32.825786 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-hrxn2_fb608395-17b5-4b92-a0be-b5abc08ac979/manager/1.log" Feb 18 15:22:33 crc kubenswrapper[4739]: I0218 15:22:33.051771 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-hrxn2_fb608395-17b5-4b92-a0be-b5abc08ac979/manager/0.log" Feb 18 15:22:33 crc kubenswrapper[4739]: I0218 15:22:33.101306 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-54k4b_b1d0315e-6ccb-4c6a-a488-98454bb41358/manager/0.log" Feb 18 15:22:33 crc kubenswrapper[4739]: I0218 15:22:33.411722 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:22:33 crc kubenswrapper[4739]: E0218 15:22:33.412168 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:22:33 crc kubenswrapper[4739]: I0218 15:22:33.519495 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-q4vb2_2e8e2d9d-fbfe-409e-bf3e-ea47e48e1682/manager/0.log" Feb 18 15:22:33 crc kubenswrapper[4739]: I0218 15:22:33.795237 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-prt26_209f2e6c-29e9-444b-b14a-10eadb782a59/manager/0.log" Feb 18 15:22:34 crc kubenswrapper[4739]: I0218 15:22:34.083529 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-b9hds_d617f67f-2577-418f-a367-42c366c17980/manager/0.log" Feb 18 15:22:34 crc kubenswrapper[4739]: I0218 15:22:34.422807 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-8vh65_92f1b9c3-1bdd-48ca-9a76-68ace2635cf1/manager/0.log" Feb 18 15:22:34 crc kubenswrapper[4739]: I0218 15:22:34.494940 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-cdt9l_3b114d0a-837c-4f0c-b02a-db694bdab362/manager/0.log" Feb 18 15:22:34 crc kubenswrapper[4739]: I0218 15:22:34.832660 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-rk7x9_40be8fff-51f0-467a-aca5-517e02eea23b/manager/0.log" Feb 18 15:22:35 crc kubenswrapper[4739]: I0218 15:22:35.144516 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl_52927612-b074-4573-aa63-41cbb1d704bf/manager/1.log" Feb 18 15:22:35 crc kubenswrapper[4739]: I0218 15:22:35.366208 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9ckgksl_52927612-b074-4573-aa63-41cbb1d704bf/manager/0.log" Feb 18 15:22:35 crc kubenswrapper[4739]: I0218 15:22:35.837674 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5864f6ff6b-7n5hc_8bf4ed0a-8055-462b-9324-1fa1c4f429b1/operator/0.log" Feb 18 15:22:36 crc kubenswrapper[4739]: I0218 15:22:36.239990 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cnhvq_07815587-810f-4c17-a671-8c613b3755d6/registry-server/1.log" Feb 18 15:22:36 crc kubenswrapper[4739]: I0218 15:22:36.375276 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-4f4zc_d34f7233-92b8-4803-ab81-0da45a4de925/manager/1.log" Feb 18 15:22:36 crc kubenswrapper[4739]: I0218 15:22:36.455792 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cnhvq_07815587-810f-4c17-a671-8c613b3755d6/registry-server/0.log" Feb 18 15:22:36 crc kubenswrapper[4739]: I0218 15:22:36.806500 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-4lkbs_8336a5f7-2ff0-440a-88b0-a6ab51692965/manager/0.log" Feb 18 15:22:37 crc kubenswrapper[4739]: I0218 15:22:37.361977 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-lmvdv_e19083b1-791a-4549-b64e-0bb0032abad2/manager/0.log" Feb 18 15:22:37 crc kubenswrapper[4739]: I0218 15:22:37.593148 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-7gszz_06163b75-4f40-42a0-83d8-70c935b9172c/operator/0.log" Feb 18 15:22:37 crc kubenswrapper[4739]: I0218 15:22:37.881120 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-s7fsm_ac911184-3930-4f7e-9d77-2cc9e7262ea6/manager/0.log" Feb 18 15:22:38 crc kubenswrapper[4739]: I0218 15:22:38.464296 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-4f4zc_d34f7233-92b8-4803-ab81-0da45a4de925/manager/0.log" Feb 18 15:22:38 crc kubenswrapper[4739]: I0218 15:22:38.544733 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-jblfh_6741b4b4-1817-4639-bdf6-b5be2729a1fa/manager/1.log" Feb 18 15:22:38 crc kubenswrapper[4739]: I0218 15:22:38.693248 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7954588dd9-trg52_8add2ed9-6416-4e9f-a3a1-f8a615962850/manager/0.log" Feb 18 15:22:38 crc kubenswrapper[4739]: I0218 15:22:38.721254 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6956d67c5c-52bt7_538f0d59-9eea-4f76-a310-f7f724593a1e/manager/0.log" Feb 18 15:22:38 crc kubenswrapper[4739]: I0218 15:22:38.721972 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-jblfh_6741b4b4-1817-4639-bdf6-b5be2729a1fa/manager/0.log" Feb 18 15:22:39 crc kubenswrapper[4739]: I0218 15:22:39.470516 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-kssdd_caed7b7d-66db-4bd9-ba33-efc5f3951069/manager/0.log" Feb 18 15:22:45 crc kubenswrapper[4739]: I0218 15:22:45.487991 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-knpz9_61bc4b17-baf6-435c-9280-b97fcede913c/manager/0.log" Feb 18 15:22:47 crc kubenswrapper[4739]: I0218 15:22:47.411890 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:22:47 crc kubenswrapper[4739]: E0218 15:22:47.412899 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:23:02 crc kubenswrapper[4739]: I0218 15:23:02.411089 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:23:02 crc kubenswrapper[4739]: E0218 15:23:02.411895 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:23:03 crc kubenswrapper[4739]: I0218 15:23:03.545786 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-pgswj_ffd4b935-0435-4a73-a7cd-596856c63f84/control-plane-machine-set-operator/0.log" Feb 18 15:23:03 crc kubenswrapper[4739]: I0218 15:23:03.882271 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-sqm9s_d41d7405-9b25-414a-a247-1d945df68f89/kube-rbac-proxy/0.log" Feb 18 15:23:03 crc kubenswrapper[4739]: I0218 15:23:03.941076 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-sqm9s_d41d7405-9b25-414a-a247-1d945df68f89/machine-api-operator/0.log" Feb 18 15:23:13 crc kubenswrapper[4739]: I0218 15:23:13.411088 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:23:13 crc kubenswrapper[4739]: E0218 15:23:13.411896 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:23:18 crc kubenswrapper[4739]: I0218 15:23:18.623214 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-bfgbz_4a1588a0-096b-4e77-b251-f034a57c7a04/cert-manager-controller/0.log" Feb 18 15:23:18 crc kubenswrapper[4739]: I0218 15:23:18.846277 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-xl5rj_09228bff-e02a-4a38-86ab-3d18492c3fa1/cert-manager-cainjector/0.log" Feb 18 15:23:18 crc kubenswrapper[4739]: I0218 15:23:18.953694 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-927qr_c9731232-5945-414d-bf7c-cd9207130675/cert-manager-webhook/0.log" Feb 18 15:23:27 crc kubenswrapper[4739]: I0218 15:23:27.419193 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:23:27 crc kubenswrapper[4739]: E0218 15:23:27.420806 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:23:32 crc kubenswrapper[4739]: I0218 15:23:32.177833 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-c8h9g_292e9bf2-9674-423f-9ba5-4e83ff259a06/nmstate-console-plugin/0.log" Feb 18 15:23:32 crc kubenswrapper[4739]: I0218 15:23:32.396900 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-4l8z8_3bc7475a-7f37-4d47-a7e8-2c58a37c7c0b/kube-rbac-proxy/0.log" Feb 18 15:23:32 crc kubenswrapper[4739]: I0218 15:23:32.414247 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-xwm5v_547a8c99-05a3-45bf-9e45-785d6cdb8fb5/nmstate-handler/0.log" Feb 18 15:23:32 crc kubenswrapper[4739]: I0218 15:23:32.585899 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-4l8z8_3bc7475a-7f37-4d47-a7e8-2c58a37c7c0b/nmstate-metrics/0.log" Feb 18 15:23:32 crc kubenswrapper[4739]: I0218 15:23:32.598428 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-77rqb_2f5c1234-49df-4f31-842f-cdaf04adff3c/nmstate-operator/0.log" Feb 18 15:23:32 crc kubenswrapper[4739]: I0218 15:23:32.753312 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-wtz97_ff0bf868-48fc-48a7-845d-3286c1dd16f0/nmstate-webhook/0.log" Feb 18 15:23:40 crc kubenswrapper[4739]: I0218 15:23:40.410936 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:23:40 crc kubenswrapper[4739]: E0218 15:23:40.411648 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:23:46 crc kubenswrapper[4739]: I0218 15:23:46.814786 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7c7d667b45-kx8bw_4091e4df-be25-4e94-bf12-7079a8ce9b5f/kube-rbac-proxy/0.log" Feb 18 15:23:46 crc kubenswrapper[4739]: I0218 15:23:46.852687 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7c7d667b45-kx8bw_4091e4df-be25-4e94-bf12-7079a8ce9b5f/manager/1.log" Feb 18 15:23:47 crc kubenswrapper[4739]: I0218 15:23:47.485788 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7c7d667b45-kx8bw_4091e4df-be25-4e94-bf12-7079a8ce9b5f/manager/0.log" Feb 18 15:23:53 crc kubenswrapper[4739]: I0218 15:23:53.410667 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:23:53 crc kubenswrapper[4739]: E0218 15:23:53.411391 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:23:59 crc kubenswrapper[4739]: I0218 15:23:59.700928 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-c9tcc_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc/prometheus-operator/0.log" Feb 18 15:23:59 crc kubenswrapper[4739]: I0218 15:23:59.807182 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_3d337f75-bb26-461d-9519-f17c333cfc55/prometheus-operator-admission-webhook/0.log" Feb 18 15:23:59 crc kubenswrapper[4739]: I0218 15:23:59.950838 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_e257eada-747c-4c16-ade0-64120ce08e5b/prometheus-operator-admission-webhook/0.log" Feb 18 15:23:59 crc kubenswrapper[4739]: I0218 15:23:59.990537 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-mqkqw_0348c042-11c0-4a27-a8d4-04beea8e11a3/operator/1.log" Feb 18 15:24:00 crc kubenswrapper[4739]: I0218 15:24:00.122066 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-mqkqw_0348c042-11c0-4a27-a8d4-04beea8e11a3/operator/0.log" Feb 18 15:24:00 crc kubenswrapper[4739]: I0218 15:24:00.157927 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-m5hn7_7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b/observability-ui-dashboards/0.log" Feb 18 15:24:00 crc kubenswrapper[4739]: I0218 15:24:00.298856 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-lpf5k_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe/perses-operator/0.log" Feb 18 15:24:05 crc kubenswrapper[4739]: I0218 15:24:05.411578 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:24:05 crc kubenswrapper[4739]: E0218 15:24:05.412296 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:24:14 crc kubenswrapper[4739]: I0218 15:24:14.752436 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-54nln_4b0da132-982d-47b8-ae8a-d0529fbfe6a4/cluster-logging-operator/0.log" 
Feb 18 15:24:14 crc kubenswrapper[4739]: I0218 15:24:14.961687 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-ptdrt_3d3df5da-d291-44d1-890f-4f094d9e8301/collector/0.log" Feb 18 15:24:15 crc kubenswrapper[4739]: I0218 15:24:15.020769 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_8cadd086-3e21-4dfc-9577-356fdcfe83c1/loki-compactor/0.log" Feb 18 15:24:15 crc kubenswrapper[4739]: I0218 15:24:15.144600 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-68g9x_d2537052-1467-4892-afe4-cafbbdfbd645/loki-distributor/0.log" Feb 18 15:24:15 crc kubenswrapper[4739]: I0218 15:24:15.275751 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f9bf547f9-nd7jd_717b73b9-8190-41ce-8513-eb314a37cdfd/gateway/0.log" Feb 18 15:24:15 crc kubenswrapper[4739]: I0218 15:24:15.292669 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f9bf547f9-nd7jd_717b73b9-8190-41ce-8513-eb314a37cdfd/opa/0.log" Feb 18 15:24:15 crc kubenswrapper[4739]: I0218 15:24:15.435938 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f9bf547f9-whgjq_82d2d64c-4971-48ee-a75c-30adadf054de/gateway/0.log" Feb 18 15:24:15 crc kubenswrapper[4739]: I0218 15:24:15.470677 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5f9bf547f9-whgjq_82d2d64c-4971-48ee-a75c-30adadf054de/opa/0.log" Feb 18 15:24:15 crc kubenswrapper[4739]: I0218 15:24:15.591417 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_d13e1961-45de-4db2-a4cb-04c91c7b18ad/loki-index-gateway/0.log" Feb 18 15:24:15 crc kubenswrapper[4739]: I0218 15:24:15.756310 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_bfabc0be-78aa-4cf2-ae16-6d226b95be03/loki-ingester/0.log" Feb 18 15:24:15 crc kubenswrapper[4739]: I0218 15:24:15.788037 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-ccsmg_3886312a-0449-43cc-b914-a4633b2c7e80/loki-querier/0.log" Feb 18 15:24:15 crc kubenswrapper[4739]: I0218 15:24:15.937214 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-grbnx_f6ad99a5-d1e9-44a4-bf58-b2085ac14b4b/loki-query-frontend/0.log" Feb 18 15:24:17 crc kubenswrapper[4739]: I0218 15:24:17.410343 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:24:17 crc kubenswrapper[4739]: E0218 15:24:17.410986 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:24:29 crc kubenswrapper[4739]: I0218 15:24:29.410989 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:24:29 crc kubenswrapper[4739]: E0218 15:24:29.411876 4739 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:24:31 crc kubenswrapper[4739]: I0218 15:24:31.935836 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-tr2nx_7bcf09d7-a0a6-4225-a222-1c05f51e5f7d/controller/1.log" Feb 18 15:24:32 crc kubenswrapper[4739]: I0218 15:24:32.104340 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-tr2nx_7bcf09d7-a0a6-4225-a222-1c05f51e5f7d/controller/0.log" Feb 18 15:24:32 crc kubenswrapper[4739]: I0218 15:24:32.201473 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-tr2nx_7bcf09d7-a0a6-4225-a222-1c05f51e5f7d/kube-rbac-proxy/0.log" Feb 18 15:24:32 crc kubenswrapper[4739]: I0218 15:24:32.241310 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/cp-frr-files/0.log" Feb 18 15:24:32 crc kubenswrapper[4739]: I0218 15:24:32.444163 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/cp-metrics/0.log" Feb 18 15:24:32 crc kubenswrapper[4739]: I0218 15:24:32.475207 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/cp-frr-files/0.log" Feb 18 15:24:32 crc kubenswrapper[4739]: I0218 15:24:32.498680 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/cp-reloader/0.log" Feb 18 15:24:32 crc kubenswrapper[4739]: I0218 15:24:32.546647 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/cp-reloader/0.log" Feb 18 15:24:32 crc kubenswrapper[4739]: I0218 15:24:32.733939 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/cp-frr-files/0.log" Feb 18 15:24:32 crc kubenswrapper[4739]: I0218 15:24:32.774870 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/cp-reloader/0.log" Feb 18 15:24:32 crc kubenswrapper[4739]: I0218 15:24:32.825075 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/cp-metrics/0.log" Feb 18 15:24:32 crc kubenswrapper[4739]: I0218 15:24:32.878845 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/cp-metrics/0.log" Feb 18 15:24:33 crc kubenswrapper[4739]: I0218 15:24:33.041176 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/cp-metrics/0.log" Feb 18 15:24:33 crc kubenswrapper[4739]: I0218 15:24:33.045253 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/cp-reloader/0.log" Feb 18 15:24:33 crc kubenswrapper[4739]: I0218 15:24:33.060099 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/cp-frr-files/0.log" Feb 18 15:24:33 crc kubenswrapper[4739]: I0218 15:24:33.116398 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/controller/0.log" Feb 18 15:24:33 crc kubenswrapper[4739]: I0218 15:24:33.268398 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/frr/1.log" Feb 18 15:24:33 crc kubenswrapper[4739]: I0218 15:24:33.277260 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/frr-metrics/0.log" Feb 18 15:24:33 crc kubenswrapper[4739]: I0218 15:24:33.347877 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/kube-rbac-proxy/0.log" Feb 18 15:24:33 crc kubenswrapper[4739]: I0218 15:24:33.532536 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/kube-rbac-proxy-frr/0.log" Feb 18 15:24:33 crc kubenswrapper[4739]: I0218 15:24:33.590692 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/reloader/0.log" Feb 18 15:24:33 crc kubenswrapper[4739]: I0218 15:24:33.750792 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-q8h4v_bf495248-0dde-4619-bce7-2cbbda1fd646/frr-k8s-webhook-server/0.log" Feb 18 15:24:33 crc kubenswrapper[4739]: I0218 15:24:33.878489 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5b78699c88-r8kr2_d5023d08-507d-422f-b218-72057e18ef93/manager/1.log" Feb 18 15:24:34 crc kubenswrapper[4739]: I0218 15:24:34.032299 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5b78699c88-r8kr2_d5023d08-507d-422f-b218-72057e18ef93/manager/0.log" Feb 18 15:24:34 crc kubenswrapper[4739]: I0218 15:24:34.103712 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-86f6cb9d5d-8jd6g_0183ebc4-768c-4e08-8f1c-059fff8ba4e3/webhook-server/1.log" Feb 18 15:24:34 crc kubenswrapper[4739]: I0218 15:24:34.311487 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-86f6cb9d5d-8jd6g_0183ebc4-768c-4e08-8f1c-059fff8ba4e3/webhook-server/0.log" Feb 18 15:24:34 crc kubenswrapper[4739]: I0218 15:24:34.493725 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8gqkq_65fdc711-6806-433f-9f62-a09e816c6acf/kube-rbac-proxy/0.log" Feb 18 15:24:34 crc kubenswrapper[4739]: I0218 15:24:34.666228 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-w8l6z_8ee20c2c-abb7-44a8-a5f9-8cacfce6f781/frr/0.log" Feb 18 15:24:35 crc kubenswrapper[4739]: I0218 15:24:35.047282 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8gqkq_65fdc711-6806-433f-9f62-a09e816c6acf/speaker/1.log" Feb 18 15:24:35 crc kubenswrapper[4739]: I0218 15:24:35.247465 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8gqkq_65fdc711-6806-433f-9f62-a09e816c6acf/speaker/0.log" Feb 18 15:24:40 crc kubenswrapper[4739]: I0218 15:24:40.411093 4739 scope.go:117] 
"RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:24:40 crc kubenswrapper[4739]: E0218 15:24:40.413121 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:24:49 crc kubenswrapper[4739]: I0218 15:24:49.543157 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7_4fece5bf-a118-4158-9879-3b4ca9e751af/util/0.log" Feb 18 15:24:49 crc kubenswrapper[4739]: I0218 15:24:49.712801 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7_4fece5bf-a118-4158-9879-3b4ca9e751af/util/0.log" Feb 18 15:24:49 crc kubenswrapper[4739]: I0218 15:24:49.769868 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7_4fece5bf-a118-4158-9879-3b4ca9e751af/pull/0.log" Feb 18 15:24:49 crc kubenswrapper[4739]: I0218 15:24:49.804022 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7_4fece5bf-a118-4158-9879-3b4ca9e751af/pull/0.log" Feb 18 15:24:49 crc kubenswrapper[4739]: I0218 15:24:49.989690 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7_4fece5bf-a118-4158-9879-3b4ca9e751af/util/0.log" Feb 18 15:24:50 crc kubenswrapper[4739]: I0218 15:24:50.033954 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7_4fece5bf-a118-4158-9879-3b4ca9e751af/pull/0.log" Feb 18 15:24:50 crc kubenswrapper[4739]: I0218 15:24:50.040746 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19thnc7_4fece5bf-a118-4158-9879-3b4ca9e751af/extract/0.log" Feb 18 15:24:50 crc kubenswrapper[4739]: I0218 15:24:50.172547 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8_8d944a4d-4b9c-43f2-be16-0f222b4cb0c9/util/0.log" Feb 18 15:24:50 crc kubenswrapper[4739]: I0218 15:24:50.333162 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8_8d944a4d-4b9c-43f2-be16-0f222b4cb0c9/util/0.log" Feb 18 15:24:50 crc kubenswrapper[4739]: I0218 15:24:50.394832 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8_8d944a4d-4b9c-43f2-be16-0f222b4cb0c9/pull/0.log" Feb 18 15:24:50 crc kubenswrapper[4739]: I0218 15:24:50.401897 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8_8d944a4d-4b9c-43f2-be16-0f222b4cb0c9/pull/0.log" Feb 18 15:24:51 crc kubenswrapper[4739]: I0218 15:24:51.175754 4739 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8_8d944a4d-4b9c-43f2-be16-0f222b4cb0c9/pull/0.log" Feb 18 15:24:51 crc kubenswrapper[4739]: I0218 15:24:51.183848 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8_8d944a4d-4b9c-43f2-be16-0f222b4cb0c9/extract/0.log" Feb 18 15:24:51 crc kubenswrapper[4739]: I0218 15:24:51.207756 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0885ck8_8d944a4d-4b9c-43f2-be16-0f222b4cb0c9/util/0.log" Feb 18 15:24:51 crc kubenswrapper[4739]: I0218 15:24:51.406809 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l_0e9e5f51-e676-4cb2-8e3e-b07341a3029a/util/0.log" Feb 18 15:24:51 crc kubenswrapper[4739]: I0218 15:24:51.868672 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l_0e9e5f51-e676-4cb2-8e3e-b07341a3029a/pull/0.log" Feb 18 15:24:51 crc kubenswrapper[4739]: I0218 15:24:51.870791 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l_0e9e5f51-e676-4cb2-8e3e-b07341a3029a/util/0.log" Feb 18 15:24:51 crc kubenswrapper[4739]: I0218 15:24:51.874250 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l_0e9e5f51-e676-4cb2-8e3e-b07341a3029a/pull/0.log" Feb 18 15:24:52 crc kubenswrapper[4739]: I0218 15:24:52.090762 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l_0e9e5f51-e676-4cb2-8e3e-b07341a3029a/util/0.log" Feb 18 15:24:52 crc kubenswrapper[4739]: I0218 15:24:52.114293 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l_0e9e5f51-e676-4cb2-8e3e-b07341a3029a/extract/0.log" Feb 18 15:24:52 crc kubenswrapper[4739]: I0218 15:24:52.128222 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213b9v6l_0e9e5f51-e676-4cb2-8e3e-b07341a3029a/pull/0.log" Feb 18 15:24:52 crc kubenswrapper[4739]: I0218 15:24:52.291657 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v6sbz_c0ff243b-1f5d-4ab1-af8c-38a98b870d27/extract-utilities/0.log" Feb 18 15:24:53 crc kubenswrapper[4739]: I0218 15:24:53.082070 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v6sbz_c0ff243b-1f5d-4ab1-af8c-38a98b870d27/extract-content/0.log" Feb 18 15:24:53 crc kubenswrapper[4739]: I0218 15:24:53.087577 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v6sbz_c0ff243b-1f5d-4ab1-af8c-38a98b870d27/extract-content/0.log" Feb 18 15:24:53 crc kubenswrapper[4739]: I0218 15:24:53.108364 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v6sbz_c0ff243b-1f5d-4ab1-af8c-38a98b870d27/extract-utilities/0.log" Feb 18 15:24:53 crc kubenswrapper[4739]: I0218 15:24:53.299957 
4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v6sbz_c0ff243b-1f5d-4ab1-af8c-38a98b870d27/extract-utilities/0.log" Feb 18 15:24:53 crc kubenswrapper[4739]: I0218 15:24:53.306099 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v6sbz_c0ff243b-1f5d-4ab1-af8c-38a98b870d27/extract-content/0.log" Feb 18 15:24:53 crc kubenswrapper[4739]: I0218 15:24:53.410823 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:24:53 crc kubenswrapper[4739]: E0218 15:24:53.411144 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:24:53 crc kubenswrapper[4739]: I0218 15:24:53.544648 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fmqk2_f143bfcf-f351-4ede-ab73-311c97dcb20d/extract-utilities/0.log" Feb 18 15:24:53 crc kubenswrapper[4739]: I0218 15:24:53.736664 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fmqk2_f143bfcf-f351-4ede-ab73-311c97dcb20d/extract-utilities/0.log" Feb 18 15:24:53 crc kubenswrapper[4739]: I0218 15:24:53.783795 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fmqk2_f143bfcf-f351-4ede-ab73-311c97dcb20d/extract-content/0.log" Feb 18 15:24:53 crc kubenswrapper[4739]: I0218 15:24:53.849201 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fmqk2_f143bfcf-f351-4ede-ab73-311c97dcb20d/extract-content/0.log" Feb 18 15:24:54 crc kubenswrapper[4739]: I0218 15:24:54.039648 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fmqk2_f143bfcf-f351-4ede-ab73-311c97dcb20d/extract-utilities/0.log" Feb 18 15:24:54 crc kubenswrapper[4739]: I0218 15:24:54.086745 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fmqk2_f143bfcf-f351-4ede-ab73-311c97dcb20d/extract-content/0.log" Feb 18 15:24:54 crc kubenswrapper[4739]: I0218 15:24:54.276705 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d_517d6503-525a-420f-b4e7-1732df952bd4/util/0.log" Feb 18 15:24:54 crc kubenswrapper[4739]: I0218 15:24:54.456099 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v6sbz_c0ff243b-1f5d-4ab1-af8c-38a98b870d27/registry-server/0.log" Feb 18 15:24:54 crc kubenswrapper[4739]: I0218 15:24:54.559389 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d_517d6503-525a-420f-b4e7-1732df952bd4/util/0.log" Feb 18 15:24:54 crc kubenswrapper[4739]: I0218 15:24:54.604496 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d_517d6503-525a-420f-b4e7-1732df952bd4/pull/0.log" Feb 18 15:24:54 crc 
kubenswrapper[4739]: I0218 15:24:54.618831 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fmqk2_f143bfcf-f351-4ede-ab73-311c97dcb20d/registry-server/0.log" Feb 18 15:24:54 crc kubenswrapper[4739]: I0218 15:24:54.627089 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d_517d6503-525a-420f-b4e7-1732df952bd4/pull/0.log" Feb 18 15:24:54 crc kubenswrapper[4739]: I0218 15:24:54.804566 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d_517d6503-525a-420f-b4e7-1732df952bd4/pull/0.log" Feb 18 15:24:54 crc kubenswrapper[4739]: I0218 15:24:54.812953 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d_517d6503-525a-420f-b4e7-1732df952bd4/util/0.log" Feb 18 15:24:54 crc kubenswrapper[4739]: I0218 15:24:54.834566 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989jrj9d_517d6503-525a-420f-b4e7-1732df952bd4/extract/0.log" Feb 18 15:24:54 crc kubenswrapper[4739]: I0218 15:24:54.866068 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g_6bd02fb2-605c-422a-9c28-67afe997782a/util/0.log" Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.091020 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g_6bd02fb2-605c-422a-9c28-67afe997782a/pull/0.log" Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.138655 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g_6bd02fb2-605c-422a-9c28-67afe997782a/pull/0.log" Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.150645 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g_6bd02fb2-605c-422a-9c28-67afe997782a/util/0.log" Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.341476 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g_6bd02fb2-605c-422a-9c28-67afe997782a/util/0.log" Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.351742 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g_6bd02fb2-605c-422a-9c28-67afe997782a/extract/0.log" Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.391354 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-28vcn_0dc6acff-649a-4e95-ba42-ad79dae4a787/marketplace-operator/1.log" Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.407905 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecagnw4g_6bd02fb2-605c-422a-9c28-67afe997782a/pull/0.log" Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.541510 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-28vcn_0dc6acff-649a-4e95-ba42-ad79dae4a787/marketplace-operator/0.log" Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.600531 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p4z7n_0cc54472-7fa4-457e-a332-420ce4a7da93/extract-utilities/0.log" Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.780657 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p4z7n_0cc54472-7fa4-457e-a332-420ce4a7da93/extract-utilities/0.log" Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.798259 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p4z7n_0cc54472-7fa4-457e-a332-420ce4a7da93/extract-content/0.log" Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.798361 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p4z7n_0cc54472-7fa4-457e-a332-420ce4a7da93/extract-content/0.log" Feb 18 15:24:55 crc kubenswrapper[4739]: I0218 15:24:55.978146 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p4z7n_0cc54472-7fa4-457e-a332-420ce4a7da93/extract-utilities/0.log" Feb 18 15:24:56 crc kubenswrapper[4739]: I0218 15:24:56.276912 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p4z7n_0cc54472-7fa4-457e-a332-420ce4a7da93/extract-content/0.log" Feb 18 15:24:56 crc kubenswrapper[4739]: I0218 15:24:56.335333 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hvzqm_c2f46b1c-aab8-49aa-936d-40da9b28333b/extract-utilities/0.log" Feb 18 15:24:56 crc kubenswrapper[4739]: I0218 15:24:56.467185 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-p4z7n_0cc54472-7fa4-457e-a332-420ce4a7da93/registry-server/0.log" Feb 18 15:24:56 crc kubenswrapper[4739]: I0218 15:24:56.531496 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hvzqm_c2f46b1c-aab8-49aa-936d-40da9b28333b/extract-utilities/0.log" Feb 18 15:24:56 crc kubenswrapper[4739]: I0218 15:24:56.558927 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hvzqm_c2f46b1c-aab8-49aa-936d-40da9b28333b/extract-content/0.log" Feb 18 15:24:56 crc kubenswrapper[4739]: I0218 15:24:56.573387 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hvzqm_c2f46b1c-aab8-49aa-936d-40da9b28333b/extract-content/0.log" Feb 18 15:24:56 crc kubenswrapper[4739]: I0218 15:24:56.736077 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hvzqm_c2f46b1c-aab8-49aa-936d-40da9b28333b/extract-content/0.log" Feb 18 15:24:56 crc kubenswrapper[4739]: I0218 15:24:56.743531 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hvzqm_c2f46b1c-aab8-49aa-936d-40da9b28333b/extract-utilities/0.log" Feb 18 15:24:57 crc kubenswrapper[4739]: I0218 15:24:57.199394 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hvzqm_c2f46b1c-aab8-49aa-936d-40da9b28333b/registry-server/0.log" Feb 18 15:25:06 crc kubenswrapper[4739]: I0218 15:25:06.411041 4739 scope.go:117] "RemoveContainer" 
containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:25:06 crc kubenswrapper[4739]: E0218 15:25:06.413428 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:25:11 crc kubenswrapper[4739]: I0218 15:25:11.329054 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-547f5ff-49bj6_3d337f75-bb26-461d-9519-f17c333cfc55/prometheus-operator-admission-webhook/0.log" Feb 18 15:25:11 crc kubenswrapper[4739]: I0218 15:25:11.340228 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-547f5ff-7mn2h_e257eada-747c-4c16-ade0-64120ce08e5b/prometheus-operator-admission-webhook/0.log" Feb 18 15:25:11 crc kubenswrapper[4739]: I0218 15:25:11.364930 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-c9tcc_ef4587aa-49cd-4fd3-a5e6-05b0b5139cbc/prometheus-operator/0.log" Feb 18 15:25:11 crc kubenswrapper[4739]: I0218 15:25:11.582504 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-mqkqw_0348c042-11c0-4a27-a8d4-04beea8e11a3/operator/1.log" Feb 18 15:25:11 crc kubenswrapper[4739]: I0218 15:25:11.617736 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-m5hn7_7b9ec1ac-cb5f-4d36-8576-d039f5d85e1b/observability-ui-dashboards/0.log" Feb 18 15:25:11 crc kubenswrapper[4739]: I0218 15:25:11.641965 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-mqkqw_0348c042-11c0-4a27-a8d4-04beea8e11a3/operator/0.log" Feb 18 15:25:11 crc kubenswrapper[4739]: I0218 15:25:11.680990 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-lpf5k_2a79887e-1b6d-44ed-b3e1-f1c7c65b48fe/perses-operator/0.log" Feb 18 15:25:18 crc kubenswrapper[4739]: I0218 15:25:18.420917 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:25:18 crc kubenswrapper[4739]: E0218 15:25:18.421751 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:25:27 crc kubenswrapper[4739]: I0218 15:25:27.269130 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7c7d667b45-kx8bw_4091e4df-be25-4e94-bf12-7079a8ce9b5f/kube-rbac-proxy/0.log" Feb 18 15:25:27 crc kubenswrapper[4739]: I0218 15:25:27.311962 4739 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7c7d667b45-kx8bw_4091e4df-be25-4e94-bf12-7079a8ce9b5f/manager/1.log" Feb 18 15:25:27 crc kubenswrapper[4739]: I0218 15:25:27.365543 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-7c7d667b45-kx8bw_4091e4df-be25-4e94-bf12-7079a8ce9b5f/manager/0.log" Feb 18 15:25:31 crc kubenswrapper[4739]: I0218 15:25:31.410436 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:25:31 crc kubenswrapper[4739]: E0218 15:25:31.411314 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:25:43 crc kubenswrapper[4739]: I0218 15:25:43.411380 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:25:43 crc kubenswrapper[4739]: E0218 15:25:43.412026 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:25:56 crc kubenswrapper[4739]: I0218 15:25:56.411042 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:25:56 crc kubenswrapper[4739]: E0218 15:25:56.411879 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:26:07 crc kubenswrapper[4739]: I0218 15:26:07.410723 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:26:07 crc kubenswrapper[4739]: E0218 15:26:07.411552 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:26:22 crc kubenswrapper[4739]: I0218 15:26:22.410814 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:26:22 crc kubenswrapper[4739]: E0218 15:26:22.411881 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:26:33 crc kubenswrapper[4739]: I0218 15:26:33.411881 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:26:34 crc kubenswrapper[4739]: I0218 15:26:34.563714 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"fe8593c5c5f5083dfa905ea7aa460cd337f7eb49309e21cc20ce89f16076db9d"} Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.021026 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xpngx"] Feb 18 15:27:24 crc kubenswrapper[4739]: E0218 15:27:24.022139 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d" containerName="container-00" Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.022154 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d" containerName="container-00" Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.022432 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="af97dec6-dccd-4c5d-aa2d-a2c1dfd5685d" containerName="container-00" Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.026668 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.048531 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xpngx"] Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.165543 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e938c96-4652-419c-97ce-90bb1d83768a-catalog-content\") pod \"certified-operators-xpngx\" (UID: \"1e938c96-4652-419c-97ce-90bb1d83768a\") " pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.165695 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e938c96-4652-419c-97ce-90bb1d83768a-utilities\") pod \"certified-operators-xpngx\" (UID: \"1e938c96-4652-419c-97ce-90bb1d83768a\") " pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.165772 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2ztn\" (UniqueName: \"kubernetes.io/projected/1e938c96-4652-419c-97ce-90bb1d83768a-kube-api-access-q2ztn\") pod \"certified-operators-xpngx\" (UID: \"1e938c96-4652-419c-97ce-90bb1d83768a\") " pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.267644 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e938c96-4652-419c-97ce-90bb1d83768a-catalog-content\") pod \"certified-operators-xpngx\" (UID: \"1e938c96-4652-419c-97ce-90bb1d83768a\") " pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:24 crc kubenswrapper[4739]: 
I0218 15:27:24.267724 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e938c96-4652-419c-97ce-90bb1d83768a-utilities\") pod \"certified-operators-xpngx\" (UID: \"1e938c96-4652-419c-97ce-90bb1d83768a\") " pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.267775 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2ztn\" (UniqueName: \"kubernetes.io/projected/1e938c96-4652-419c-97ce-90bb1d83768a-kube-api-access-q2ztn\") pod \"certified-operators-xpngx\" (UID: \"1e938c96-4652-419c-97ce-90bb1d83768a\") " pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.268696 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e938c96-4652-419c-97ce-90bb1d83768a-utilities\") pod \"certified-operators-xpngx\" (UID: \"1e938c96-4652-419c-97ce-90bb1d83768a\") " pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.268790 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e938c96-4652-419c-97ce-90bb1d83768a-catalog-content\") pod \"certified-operators-xpngx\" (UID: \"1e938c96-4652-419c-97ce-90bb1d83768a\") " pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.295187 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2ztn\" (UniqueName: \"kubernetes.io/projected/1e938c96-4652-419c-97ce-90bb1d83768a-kube-api-access-q2ztn\") pod \"certified-operators-xpngx\" (UID: \"1e938c96-4652-419c-97ce-90bb1d83768a\") " pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:24 crc kubenswrapper[4739]: I0218 15:27:24.348686 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:25 crc kubenswrapper[4739]: I0218 15:27:25.455726 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xpngx"] Feb 18 15:27:26 crc kubenswrapper[4739]: I0218 15:27:26.281488 4739 generic.go:334] "Generic (PLEG): container finished" podID="1e938c96-4652-419c-97ce-90bb1d83768a" containerID="28c0a204e2a6fab0f2b8d3e6adc5cd78b24dc0000c5bfbd94bb531d2ef39fb58" exitCode=0 Feb 18 15:27:26 crc kubenswrapper[4739]: I0218 15:27:26.281583 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpngx" event={"ID":"1e938c96-4652-419c-97ce-90bb1d83768a","Type":"ContainerDied","Data":"28c0a204e2a6fab0f2b8d3e6adc5cd78b24dc0000c5bfbd94bb531d2ef39fb58"} Feb 18 15:27:26 crc kubenswrapper[4739]: I0218 15:27:26.281794 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpngx" event={"ID":"1e938c96-4652-419c-97ce-90bb1d83768a","Type":"ContainerStarted","Data":"be7eb90f9f21d375f11e8bed13d64d9c69e389afd98358691bb486d4b1e02662"} Feb 18 15:27:26 crc kubenswrapper[4739]: I0218 15:27:26.285706 4739 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 15:27:27 crc kubenswrapper[4739]: I0218 15:27:27.295082 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpngx" event={"ID":"1e938c96-4652-419c-97ce-90bb1d83768a","Type":"ContainerStarted","Data":"570d73fae32702625f69d7fa7b6b0d5b6390bc135c022d423ac2e21adf52a677"} Feb 18 15:27:29 crc kubenswrapper[4739]: I0218 15:27:29.325980 4739 generic.go:334] "Generic (PLEG): container finished" podID="1e938c96-4652-419c-97ce-90bb1d83768a" containerID="570d73fae32702625f69d7fa7b6b0d5b6390bc135c022d423ac2e21adf52a677" exitCode=0 Feb 18 15:27:29 crc kubenswrapper[4739]: I0218 15:27:29.326737 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpngx" event={"ID":"1e938c96-4652-419c-97ce-90bb1d83768a","Type":"ContainerDied","Data":"570d73fae32702625f69d7fa7b6b0d5b6390bc135c022d423ac2e21adf52a677"} Feb 18 15:27:30 crc kubenswrapper[4739]: I0218 15:27:30.340049 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpngx" event={"ID":"1e938c96-4652-419c-97ce-90bb1d83768a","Type":"ContainerStarted","Data":"64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2"} Feb 18 15:27:30 crc kubenswrapper[4739]: I0218 15:27:30.367124 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xpngx" podStartSLOduration=3.849306431 podStartE2EDuration="7.367102638s" podCreationTimestamp="2026-02-18 15:27:23 +0000 UTC" firstStartedPulling="2026-02-18 15:27:26.284129413 +0000 UTC m=+5278.779850335" lastFinishedPulling="2026-02-18 15:27:29.80192562 +0000 UTC m=+5282.297646542" observedRunningTime="2026-02-18 15:27:30.35839706 +0000 UTC m=+5282.854118002" watchObservedRunningTime="2026-02-18 15:27:30.367102638 +0000 UTC m=+5282.862823570" Feb 18 15:27:34 crc kubenswrapper[4739]: I0218 15:27:34.349327 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:34 crc kubenswrapper[4739]: I0218 15:27:34.350172 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xpngx" 
Feb 18 15:27:34 crc kubenswrapper[4739]: I0218 15:27:34.401970 4739 generic.go:334] "Generic (PLEG): container finished" podID="205cb55b-f489-4c55-aa9e-13f9ff38def6" containerID="b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a" exitCode=0 Feb 18 15:27:34 crc kubenswrapper[4739]: I0218 15:27:34.402026 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-26llf/must-gather-vps8f" event={"ID":"205cb55b-f489-4c55-aa9e-13f9ff38def6","Type":"ContainerDied","Data":"b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a"} Feb 18 15:27:34 crc kubenswrapper[4739]: I0218 15:27:34.403099 4739 scope.go:117] "RemoveContainer" containerID="b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a" Feb 18 15:27:35 crc kubenswrapper[4739]: I0218 15:27:35.341687 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-26llf_must-gather-vps8f_205cb55b-f489-4c55-aa9e-13f9ff38def6/gather/0.log" Feb 18 15:27:35 crc kubenswrapper[4739]: I0218 15:27:35.409187 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-xpngx" podUID="1e938c96-4652-419c-97ce-90bb1d83768a" containerName="registry-server" probeResult="failure" output=< Feb 18 15:27:35 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:27:35 crc kubenswrapper[4739]: > Feb 18 15:27:39 crc kubenswrapper[4739]: I0218 15:27:39.397315 4739 scope.go:117] "RemoveContainer" containerID="7916fd68986056bd3242a9e47080df5316e2eaa9c4630168c7e653cc8da14d93" Feb 18 15:27:44 crc kubenswrapper[4739]: I0218 15:27:44.423429 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:44 crc kubenswrapper[4739]: I0218 15:27:44.479898 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:44 crc kubenswrapper[4739]: I0218 15:27:44.667387 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xpngx"] Feb 18 15:27:45 crc kubenswrapper[4739]: I0218 15:27:45.530119 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xpngx" podUID="1e938c96-4652-419c-97ce-90bb1d83768a" containerName="registry-server" containerID="cri-o://64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2" gracePeriod=2 Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.096839 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.232402 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e938c96-4652-419c-97ce-90bb1d83768a-utilities\") pod \"1e938c96-4652-419c-97ce-90bb1d83768a\" (UID: \"1e938c96-4652-419c-97ce-90bb1d83768a\") " Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.232497 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e938c96-4652-419c-97ce-90bb1d83768a-catalog-content\") pod \"1e938c96-4652-419c-97ce-90bb1d83768a\" (UID: \"1e938c96-4652-419c-97ce-90bb1d83768a\") " Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.232828 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2ztn\" (UniqueName: \"kubernetes.io/projected/1e938c96-4652-419c-97ce-90bb1d83768a-kube-api-access-q2ztn\") pod \"1e938c96-4652-419c-97ce-90bb1d83768a\" (UID: \"1e938c96-4652-419c-97ce-90bb1d83768a\") " Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.235028 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e938c96-4652-419c-97ce-90bb1d83768a-utilities" (OuterVolumeSpecName: "utilities") pod "1e938c96-4652-419c-97ce-90bb1d83768a" (UID: "1e938c96-4652-419c-97ce-90bb1d83768a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.241613 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e938c96-4652-419c-97ce-90bb1d83768a-kube-api-access-q2ztn" (OuterVolumeSpecName: "kube-api-access-q2ztn") pod "1e938c96-4652-419c-97ce-90bb1d83768a" (UID: "1e938c96-4652-419c-97ce-90bb1d83768a"). InnerVolumeSpecName "kube-api-access-q2ztn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.298714 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e938c96-4652-419c-97ce-90bb1d83768a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e938c96-4652-419c-97ce-90bb1d83768a" (UID: "1e938c96-4652-419c-97ce-90bb1d83768a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.338364 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2ztn\" (UniqueName: \"kubernetes.io/projected/1e938c96-4652-419c-97ce-90bb1d83768a-kube-api-access-q2ztn\") on node \"crc\" DevicePath \"\"" Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.338400 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e938c96-4652-419c-97ce-90bb1d83768a-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.338410 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e938c96-4652-419c-97ce-90bb1d83768a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.541742 4739 generic.go:334] "Generic (PLEG): container finished" podID="1e938c96-4652-419c-97ce-90bb1d83768a" containerID="64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2" exitCode=0 Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.541788 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpngx" event={"ID":"1e938c96-4652-419c-97ce-90bb1d83768a","Type":"ContainerDied","Data":"64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2"} Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.541819 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xpngx" event={"ID":"1e938c96-4652-419c-97ce-90bb1d83768a","Type":"ContainerDied","Data":"be7eb90f9f21d375f11e8bed13d64d9c69e389afd98358691bb486d4b1e02662"} Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.541837 4739 scope.go:117] "RemoveContainer" containerID="64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2" Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.541846 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xpngx" Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.570534 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xpngx"] Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.572133 4739 scope.go:117] "RemoveContainer" containerID="570d73fae32702625f69d7fa7b6b0d5b6390bc135c022d423ac2e21adf52a677" Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.583986 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xpngx"] Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.596076 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-26llf/must-gather-vps8f"] Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.596375 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-26llf/must-gather-vps8f" podUID="205cb55b-f489-4c55-aa9e-13f9ff38def6" containerName="copy" containerID="cri-o://18022deb0268d47bf90440c767a7078cea39460ba6ce32fa4f71fe972aa1f276" gracePeriod=2 Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.598244 4739 scope.go:117] "RemoveContainer" containerID="28c0a204e2a6fab0f2b8d3e6adc5cd78b24dc0000c5bfbd94bb531d2ef39fb58" Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.607832 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-26llf/must-gather-vps8f"] Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.814419 4739 scope.go:117] "RemoveContainer" containerID="64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2" Feb 18 15:27:46 crc kubenswrapper[4739]: E0218 15:27:46.815359 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2\": container with ID starting with 64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2 not found: ID does not exist" containerID="64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2" Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.815434 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2"} err="failed to get container status \"64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2\": rpc error: code = NotFound desc = could not find container \"64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2\": container with ID starting with 64c49466ba32ef89697fbdf8e1b0cb403853f7e3df187d50d2072287b17e8ad2 not found: ID does not exist" Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.815519 4739 scope.go:117] "RemoveContainer" containerID="570d73fae32702625f69d7fa7b6b0d5b6390bc135c022d423ac2e21adf52a677" Feb 18 15:27:46 crc kubenswrapper[4739]: E0218 15:27:46.816547 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"570d73fae32702625f69d7fa7b6b0d5b6390bc135c022d423ac2e21adf52a677\": container with ID starting with 570d73fae32702625f69d7fa7b6b0d5b6390bc135c022d423ac2e21adf52a677 not found: ID does not exist" containerID="570d73fae32702625f69d7fa7b6b0d5b6390bc135c022d423ac2e21adf52a677" Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.816590 4739 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"570d73fae32702625f69d7fa7b6b0d5b6390bc135c022d423ac2e21adf52a677"} err="failed to get container status \"570d73fae32702625f69d7fa7b6b0d5b6390bc135c022d423ac2e21adf52a677\": rpc error: code = NotFound desc = could not find container \"570d73fae32702625f69d7fa7b6b0d5b6390bc135c022d423ac2e21adf52a677\": container with ID starting with 570d73fae32702625f69d7fa7b6b0d5b6390bc135c022d423ac2e21adf52a677 not found: ID does not exist" Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.816633 4739 scope.go:117] "RemoveContainer" containerID="28c0a204e2a6fab0f2b8d3e6adc5cd78b24dc0000c5bfbd94bb531d2ef39fb58" Feb 18 15:27:46 crc kubenswrapper[4739]: E0218 15:27:46.819851 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28c0a204e2a6fab0f2b8d3e6adc5cd78b24dc0000c5bfbd94bb531d2ef39fb58\": container with ID starting with 28c0a204e2a6fab0f2b8d3e6adc5cd78b24dc0000c5bfbd94bb531d2ef39fb58 not found: ID does not exist" containerID="28c0a204e2a6fab0f2b8d3e6adc5cd78b24dc0000c5bfbd94bb531d2ef39fb58" Feb 18 15:27:46 crc kubenswrapper[4739]: I0218 15:27:46.819902 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28c0a204e2a6fab0f2b8d3e6adc5cd78b24dc0000c5bfbd94bb531d2ef39fb58"} err="failed to get container status \"28c0a204e2a6fab0f2b8d3e6adc5cd78b24dc0000c5bfbd94bb531d2ef39fb58\": rpc error: code = NotFound desc = could not find container \"28c0a204e2a6fab0f2b8d3e6adc5cd78b24dc0000c5bfbd94bb531d2ef39fb58\": container with ID starting with 28c0a204e2a6fab0f2b8d3e6adc5cd78b24dc0000c5bfbd94bb531d2ef39fb58 not found: ID does not exist" Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.228588 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-26llf_must-gather-vps8f_205cb55b-f489-4c55-aa9e-13f9ff38def6/copy/0.log" Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.229146 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-26llf/must-gather-vps8f" Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.364914 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/205cb55b-f489-4c55-aa9e-13f9ff38def6-must-gather-output\") pod \"205cb55b-f489-4c55-aa9e-13f9ff38def6\" (UID: \"205cb55b-f489-4c55-aa9e-13f9ff38def6\") " Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.364975 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkm4z\" (UniqueName: \"kubernetes.io/projected/205cb55b-f489-4c55-aa9e-13f9ff38def6-kube-api-access-hkm4z\") pod \"205cb55b-f489-4c55-aa9e-13f9ff38def6\" (UID: \"205cb55b-f489-4c55-aa9e-13f9ff38def6\") " Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.384832 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/205cb55b-f489-4c55-aa9e-13f9ff38def6-kube-api-access-hkm4z" (OuterVolumeSpecName: "kube-api-access-hkm4z") pod "205cb55b-f489-4c55-aa9e-13f9ff38def6" (UID: "205cb55b-f489-4c55-aa9e-13f9ff38def6"). InnerVolumeSpecName "kube-api-access-hkm4z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.471125 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkm4z\" (UniqueName: \"kubernetes.io/projected/205cb55b-f489-4c55-aa9e-13f9ff38def6-kube-api-access-hkm4z\") on node \"crc\" DevicePath \"\"" Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.535543 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/205cb55b-f489-4c55-aa9e-13f9ff38def6-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "205cb55b-f489-4c55-aa9e-13f9ff38def6" (UID: "205cb55b-f489-4c55-aa9e-13f9ff38def6"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.553389 4739 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-26llf_must-gather-vps8f_205cb55b-f489-4c55-aa9e-13f9ff38def6/copy/0.log" Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.554846 4739 generic.go:334] "Generic (PLEG): container finished" podID="205cb55b-f489-4c55-aa9e-13f9ff38def6" containerID="18022deb0268d47bf90440c767a7078cea39460ba6ce32fa4f71fe972aa1f276" exitCode=143 Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.554932 4739 scope.go:117] "RemoveContainer" containerID="18022deb0268d47bf90440c767a7078cea39460ba6ce32fa4f71fe972aa1f276" Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.555795 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-26llf/must-gather-vps8f" Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.574238 4739 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/205cb55b-f489-4c55-aa9e-13f9ff38def6-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.579008 4739 scope.go:117] "RemoveContainer" containerID="b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a" Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.638579 4739 scope.go:117] "RemoveContainer" containerID="18022deb0268d47bf90440c767a7078cea39460ba6ce32fa4f71fe972aa1f276" Feb 18 15:27:47 crc kubenswrapper[4739]: E0218 15:27:47.639173 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18022deb0268d47bf90440c767a7078cea39460ba6ce32fa4f71fe972aa1f276\": container with ID starting with 18022deb0268d47bf90440c767a7078cea39460ba6ce32fa4f71fe972aa1f276 not found: ID does not exist" containerID="18022deb0268d47bf90440c767a7078cea39460ba6ce32fa4f71fe972aa1f276" Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.639208 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18022deb0268d47bf90440c767a7078cea39460ba6ce32fa4f71fe972aa1f276"} err="failed to get container status \"18022deb0268d47bf90440c767a7078cea39460ba6ce32fa4f71fe972aa1f276\": rpc error: code = NotFound desc = could not find container \"18022deb0268d47bf90440c767a7078cea39460ba6ce32fa4f71fe972aa1f276\": container with ID starting with 18022deb0268d47bf90440c767a7078cea39460ba6ce32fa4f71fe972aa1f276 not found: ID does not exist" Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.639233 4739 scope.go:117] "RemoveContainer" containerID="b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a" Feb 18 15:27:47 crc 
kubenswrapper[4739]: E0218 15:27:47.639523 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a\": container with ID starting with b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a not found: ID does not exist" containerID="b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a" Feb 18 15:27:47 crc kubenswrapper[4739]: I0218 15:27:47.639549 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a"} err="failed to get container status \"b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a\": rpc error: code = NotFound desc = could not find container \"b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a\": container with ID starting with b576fd4f776c1394d871a2bb9e789b84d56bb27921fe7c095d6f0f57fab3356a not found: ID does not exist" Feb 18 15:27:48 crc kubenswrapper[4739]: I0218 15:27:48.425815 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e938c96-4652-419c-97ce-90bb1d83768a" path="/var/lib/kubelet/pods/1e938c96-4652-419c-97ce-90bb1d83768a/volumes" Feb 18 15:27:48 crc kubenswrapper[4739]: I0218 15:27:48.426957 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="205cb55b-f489-4c55-aa9e-13f9ff38def6" path="/var/lib/kubelet/pods/205cb55b-f489-4c55-aa9e-13f9ff38def6/volumes" Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.552486 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qxkb4"] Feb 18 15:28:03 crc kubenswrapper[4739]: E0218 15:28:03.555101 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e938c96-4652-419c-97ce-90bb1d83768a" containerName="registry-server" Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.555262 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e938c96-4652-419c-97ce-90bb1d83768a" containerName="registry-server" Feb 18 15:28:03 crc kubenswrapper[4739]: E0218 15:28:03.555341 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205cb55b-f489-4c55-aa9e-13f9ff38def6" containerName="gather" Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.555400 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="205cb55b-f489-4c55-aa9e-13f9ff38def6" containerName="gather" Feb 18 15:28:03 crc kubenswrapper[4739]: E0218 15:28:03.555488 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e938c96-4652-419c-97ce-90bb1d83768a" containerName="extract-content" Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.555563 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e938c96-4652-419c-97ce-90bb1d83768a" containerName="extract-content" Feb 18 15:28:03 crc kubenswrapper[4739]: E0218 15:28:03.555648 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e938c96-4652-419c-97ce-90bb1d83768a" containerName="extract-utilities" Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.555703 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e938c96-4652-419c-97ce-90bb1d83768a" containerName="extract-utilities" Feb 18 15:28:03 crc kubenswrapper[4739]: E0218 15:28:03.555773 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205cb55b-f489-4c55-aa9e-13f9ff38def6" containerName="copy" Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.555830 4739 
state_mem.go:107] "Deleted CPUSet assignment" podUID="205cb55b-f489-4c55-aa9e-13f9ff38def6" containerName="copy" Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.556190 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e938c96-4652-419c-97ce-90bb1d83768a" containerName="registry-server" Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.556306 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="205cb55b-f489-4c55-aa9e-13f9ff38def6" containerName="copy" Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.556376 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="205cb55b-f489-4c55-aa9e-13f9ff38def6" containerName="gather" Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.558703 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qxkb4" Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.567222 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qxkb4"] Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.587395 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a65b49b-3fa1-452b-8859-a16f38792f96-catalog-content\") pod \"community-operators-qxkb4\" (UID: \"7a65b49b-3fa1-452b-8859-a16f38792f96\") " pod="openshift-marketplace/community-operators-qxkb4" Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.587510 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a65b49b-3fa1-452b-8859-a16f38792f96-utilities\") pod \"community-operators-qxkb4\" (UID: \"7a65b49b-3fa1-452b-8859-a16f38792f96\") " pod="openshift-marketplace/community-operators-qxkb4" Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.587810 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvbcw\" (UniqueName: \"kubernetes.io/projected/7a65b49b-3fa1-452b-8859-a16f38792f96-kube-api-access-vvbcw\") pod \"community-operators-qxkb4\" (UID: \"7a65b49b-3fa1-452b-8859-a16f38792f96\") " pod="openshift-marketplace/community-operators-qxkb4" Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.689600 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvbcw\" (UniqueName: \"kubernetes.io/projected/7a65b49b-3fa1-452b-8859-a16f38792f96-kube-api-access-vvbcw\") pod \"community-operators-qxkb4\" (UID: \"7a65b49b-3fa1-452b-8859-a16f38792f96\") " pod="openshift-marketplace/community-operators-qxkb4" Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.689700 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a65b49b-3fa1-452b-8859-a16f38792f96-catalog-content\") pod \"community-operators-qxkb4\" (UID: \"7a65b49b-3fa1-452b-8859-a16f38792f96\") " pod="openshift-marketplace/community-operators-qxkb4" Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.689770 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a65b49b-3fa1-452b-8859-a16f38792f96-utilities\") pod \"community-operators-qxkb4\" (UID: \"7a65b49b-3fa1-452b-8859-a16f38792f96\") " pod="openshift-marketplace/community-operators-qxkb4" Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 
15:28:03.690416 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a65b49b-3fa1-452b-8859-a16f38792f96-utilities\") pod \"community-operators-qxkb4\" (UID: \"7a65b49b-3fa1-452b-8859-a16f38792f96\") " pod="openshift-marketplace/community-operators-qxkb4" Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.690643 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a65b49b-3fa1-452b-8859-a16f38792f96-catalog-content\") pod \"community-operators-qxkb4\" (UID: \"7a65b49b-3fa1-452b-8859-a16f38792f96\") " pod="openshift-marketplace/community-operators-qxkb4" Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.712174 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvbcw\" (UniqueName: \"kubernetes.io/projected/7a65b49b-3fa1-452b-8859-a16f38792f96-kube-api-access-vvbcw\") pod \"community-operators-qxkb4\" (UID: \"7a65b49b-3fa1-452b-8859-a16f38792f96\") " pod="openshift-marketplace/community-operators-qxkb4" Feb 18 15:28:03 crc kubenswrapper[4739]: I0218 15:28:03.886853 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qxkb4" Feb 18 15:28:04 crc kubenswrapper[4739]: I0218 15:28:04.407849 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qxkb4"] Feb 18 15:28:05 crc kubenswrapper[4739]: I0218 15:28:05.759011 4739 generic.go:334] "Generic (PLEG): container finished" podID="7a65b49b-3fa1-452b-8859-a16f38792f96" containerID="15e6df0bf168b594ce69f4bad25545ad0ef71cfe58e1f4512a40e86ac6e23b25" exitCode=0 Feb 18 15:28:05 crc kubenswrapper[4739]: I0218 15:28:05.759247 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxkb4" event={"ID":"7a65b49b-3fa1-452b-8859-a16f38792f96","Type":"ContainerDied","Data":"15e6df0bf168b594ce69f4bad25545ad0ef71cfe58e1f4512a40e86ac6e23b25"} Feb 18 15:28:05 crc kubenswrapper[4739]: I0218 15:28:05.759269 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxkb4" event={"ID":"7a65b49b-3fa1-452b-8859-a16f38792f96","Type":"ContainerStarted","Data":"ea2cef7b03a77ba255750eb431c397b926fb1e4142bd2cb031d62aba0eddbe71"} Feb 18 15:28:07 crc kubenswrapper[4739]: I0218 15:28:07.788047 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxkb4" event={"ID":"7a65b49b-3fa1-452b-8859-a16f38792f96","Type":"ContainerStarted","Data":"fc44eaabc4917d395c400e5df36a74094a9680a1a23d1bde203ede209b5ea1f6"} Feb 18 15:28:08 crc kubenswrapper[4739]: I0218 15:28:08.801574 4739 generic.go:334] "Generic (PLEG): container finished" podID="7a65b49b-3fa1-452b-8859-a16f38792f96" containerID="fc44eaabc4917d395c400e5df36a74094a9680a1a23d1bde203ede209b5ea1f6" exitCode=0 Feb 18 15:28:08 crc kubenswrapper[4739]: I0218 15:28:08.801676 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxkb4" event={"ID":"7a65b49b-3fa1-452b-8859-a16f38792f96","Type":"ContainerDied","Data":"fc44eaabc4917d395c400e5df36a74094a9680a1a23d1bde203ede209b5ea1f6"} Feb 18 15:28:09 crc kubenswrapper[4739]: I0218 15:28:09.828404 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxkb4" 
event={"ID":"7a65b49b-3fa1-452b-8859-a16f38792f96","Type":"ContainerStarted","Data":"2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f"} Feb 18 15:28:13 crc kubenswrapper[4739]: I0218 15:28:13.887488 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qxkb4" Feb 18 15:28:13 crc kubenswrapper[4739]: I0218 15:28:13.888049 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qxkb4" Feb 18 15:28:13 crc kubenswrapper[4739]: I0218 15:28:13.946375 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qxkb4" Feb 18 15:28:13 crc kubenswrapper[4739]: I0218 15:28:13.968205 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qxkb4" podStartSLOduration=7.440956607 podStartE2EDuration="10.96818788s" podCreationTimestamp="2026-02-18 15:28:03 +0000 UTC" firstStartedPulling="2026-02-18 15:28:05.76082594 +0000 UTC m=+5318.256546862" lastFinishedPulling="2026-02-18 15:28:09.288057223 +0000 UTC m=+5321.783778135" observedRunningTime="2026-02-18 15:28:09.85910422 +0000 UTC m=+5322.354825152" watchObservedRunningTime="2026-02-18 15:28:13.96818788 +0000 UTC m=+5326.463908812" Feb 18 15:28:14 crc kubenswrapper[4739]: I0218 15:28:14.938306 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qxkb4" Feb 18 15:28:15 crc kubenswrapper[4739]: I0218 15:28:15.003623 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qxkb4"] Feb 18 15:28:16 crc kubenswrapper[4739]: I0218 15:28:16.914436 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qxkb4" podUID="7a65b49b-3fa1-452b-8859-a16f38792f96" containerName="registry-server" containerID="cri-o://2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f" gracePeriod=2 Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.484276 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qxkb4" Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.541413 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvbcw\" (UniqueName: \"kubernetes.io/projected/7a65b49b-3fa1-452b-8859-a16f38792f96-kube-api-access-vvbcw\") pod \"7a65b49b-3fa1-452b-8859-a16f38792f96\" (UID: \"7a65b49b-3fa1-452b-8859-a16f38792f96\") " Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.541562 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a65b49b-3fa1-452b-8859-a16f38792f96-utilities\") pod \"7a65b49b-3fa1-452b-8859-a16f38792f96\" (UID: \"7a65b49b-3fa1-452b-8859-a16f38792f96\") " Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.541665 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a65b49b-3fa1-452b-8859-a16f38792f96-catalog-content\") pod \"7a65b49b-3fa1-452b-8859-a16f38792f96\" (UID: \"7a65b49b-3fa1-452b-8859-a16f38792f96\") " Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.542670 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a65b49b-3fa1-452b-8859-a16f38792f96-utilities" (OuterVolumeSpecName: "utilities") pod "7a65b49b-3fa1-452b-8859-a16f38792f96" (UID: "7a65b49b-3fa1-452b-8859-a16f38792f96"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.543165 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a65b49b-3fa1-452b-8859-a16f38792f96-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.550026 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a65b49b-3fa1-452b-8859-a16f38792f96-kube-api-access-vvbcw" (OuterVolumeSpecName: "kube-api-access-vvbcw") pod "7a65b49b-3fa1-452b-8859-a16f38792f96" (UID: "7a65b49b-3fa1-452b-8859-a16f38792f96"). InnerVolumeSpecName "kube-api-access-vvbcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.601706 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a65b49b-3fa1-452b-8859-a16f38792f96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a65b49b-3fa1-452b-8859-a16f38792f96" (UID: "7a65b49b-3fa1-452b-8859-a16f38792f96"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.646164 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvbcw\" (UniqueName: \"kubernetes.io/projected/7a65b49b-3fa1-452b-8859-a16f38792f96-kube-api-access-vvbcw\") on node \"crc\" DevicePath \"\"" Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.646468 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a65b49b-3fa1-452b-8859-a16f38792f96-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.927907 4739 generic.go:334] "Generic (PLEG): container finished" podID="7a65b49b-3fa1-452b-8859-a16f38792f96" containerID="2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f" exitCode=0 Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.927955 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxkb4" event={"ID":"7a65b49b-3fa1-452b-8859-a16f38792f96","Type":"ContainerDied","Data":"2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f"} Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.927998 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxkb4" event={"ID":"7a65b49b-3fa1-452b-8859-a16f38792f96","Type":"ContainerDied","Data":"ea2cef7b03a77ba255750eb431c397b926fb1e4142bd2cb031d62aba0eddbe71"} Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.927996 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qxkb4" Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.928016 4739 scope.go:117] "RemoveContainer" containerID="2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f" Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.955569 4739 scope.go:117] "RemoveContainer" containerID="fc44eaabc4917d395c400e5df36a74094a9680a1a23d1bde203ede209b5ea1f6" Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.973903 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qxkb4"] Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.986994 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qxkb4"] Feb 18 15:28:17 crc kubenswrapper[4739]: I0218 15:28:17.996792 4739 scope.go:117] "RemoveContainer" containerID="15e6df0bf168b594ce69f4bad25545ad0ef71cfe58e1f4512a40e86ac6e23b25" Feb 18 15:28:18 crc kubenswrapper[4739]: I0218 15:28:18.034111 4739 scope.go:117] "RemoveContainer" containerID="2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f" Feb 18 15:28:18 crc kubenswrapper[4739]: E0218 15:28:18.034796 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f\": container with ID starting with 2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f not found: ID does not exist" containerID="2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f" Feb 18 15:28:18 crc kubenswrapper[4739]: I0218 15:28:18.034846 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f"} err="failed to get container status 
\"2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f\": rpc error: code = NotFound desc = could not find container \"2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f\": container with ID starting with 2b94baef78c7915d1bbf82acc6c51f5ddbd357292aeeddefa52906ed6d99147f not found: ID does not exist" Feb 18 15:28:18 crc kubenswrapper[4739]: I0218 15:28:18.034878 4739 scope.go:117] "RemoveContainer" containerID="fc44eaabc4917d395c400e5df36a74094a9680a1a23d1bde203ede209b5ea1f6" Feb 18 15:28:18 crc kubenswrapper[4739]: E0218 15:28:18.035810 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc44eaabc4917d395c400e5df36a74094a9680a1a23d1bde203ede209b5ea1f6\": container with ID starting with fc44eaabc4917d395c400e5df36a74094a9680a1a23d1bde203ede209b5ea1f6 not found: ID does not exist" containerID="fc44eaabc4917d395c400e5df36a74094a9680a1a23d1bde203ede209b5ea1f6" Feb 18 15:28:18 crc kubenswrapper[4739]: I0218 15:28:18.035978 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc44eaabc4917d395c400e5df36a74094a9680a1a23d1bde203ede209b5ea1f6"} err="failed to get container status \"fc44eaabc4917d395c400e5df36a74094a9680a1a23d1bde203ede209b5ea1f6\": rpc error: code = NotFound desc = could not find container \"fc44eaabc4917d395c400e5df36a74094a9680a1a23d1bde203ede209b5ea1f6\": container with ID starting with fc44eaabc4917d395c400e5df36a74094a9680a1a23d1bde203ede209b5ea1f6 not found: ID does not exist" Feb 18 15:28:18 crc kubenswrapper[4739]: I0218 15:28:18.036106 4739 scope.go:117] "RemoveContainer" containerID="15e6df0bf168b594ce69f4bad25545ad0ef71cfe58e1f4512a40e86ac6e23b25" Feb 18 15:28:18 crc kubenswrapper[4739]: E0218 15:28:18.036926 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15e6df0bf168b594ce69f4bad25545ad0ef71cfe58e1f4512a40e86ac6e23b25\": container with ID starting with 15e6df0bf168b594ce69f4bad25545ad0ef71cfe58e1f4512a40e86ac6e23b25 not found: ID does not exist" containerID="15e6df0bf168b594ce69f4bad25545ad0ef71cfe58e1f4512a40e86ac6e23b25" Feb 18 15:28:18 crc kubenswrapper[4739]: I0218 15:28:18.036959 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e6df0bf168b594ce69f4bad25545ad0ef71cfe58e1f4512a40e86ac6e23b25"} err="failed to get container status \"15e6df0bf168b594ce69f4bad25545ad0ef71cfe58e1f4512a40e86ac6e23b25\": rpc error: code = NotFound desc = could not find container \"15e6df0bf168b594ce69f4bad25545ad0ef71cfe58e1f4512a40e86ac6e23b25\": container with ID starting with 15e6df0bf168b594ce69f4bad25545ad0ef71cfe58e1f4512a40e86ac6e23b25 not found: ID does not exist" Feb 18 15:28:18 crc kubenswrapper[4739]: I0218 15:28:18.425721 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a65b49b-3fa1-452b-8859-a16f38792f96" path="/var/lib/kubelet/pods/7a65b49b-3fa1-452b-8859-a16f38792f96/volumes" Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.433361 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lz2sx"] Feb 18 15:28:40 crc kubenswrapper[4739]: E0218 15:28:40.434303 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a65b49b-3fa1-452b-8859-a16f38792f96" containerName="registry-server" Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.434316 4739 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7a65b49b-3fa1-452b-8859-a16f38792f96" containerName="registry-server" Feb 18 15:28:40 crc kubenswrapper[4739]: E0218 15:28:40.434366 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a65b49b-3fa1-452b-8859-a16f38792f96" containerName="extract-content" Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.434372 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a65b49b-3fa1-452b-8859-a16f38792f96" containerName="extract-content" Feb 18 15:28:40 crc kubenswrapper[4739]: E0218 15:28:40.434384 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a65b49b-3fa1-452b-8859-a16f38792f96" containerName="extract-utilities" Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.434390 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a65b49b-3fa1-452b-8859-a16f38792f96" containerName="extract-utilities" Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.434608 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a65b49b-3fa1-452b-8859-a16f38792f96" containerName="registry-server" Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.441003 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lz2sx" Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.458141 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lz2sx"] Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.529095 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-catalog-content\") pod \"redhat-marketplace-lz2sx\" (UID: \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\") " pod="openshift-marketplace/redhat-marketplace-lz2sx" Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.529191 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-utilities\") pod \"redhat-marketplace-lz2sx\" (UID: \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\") " pod="openshift-marketplace/redhat-marketplace-lz2sx" Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.529255 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lp2d\" (UniqueName: \"kubernetes.io/projected/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-kube-api-access-8lp2d\") pod \"redhat-marketplace-lz2sx\" (UID: \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\") " pod="openshift-marketplace/redhat-marketplace-lz2sx" Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.631839 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-catalog-content\") pod \"redhat-marketplace-lz2sx\" (UID: \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\") " pod="openshift-marketplace/redhat-marketplace-lz2sx" Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.631954 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-utilities\") pod \"redhat-marketplace-lz2sx\" (UID: \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\") " pod="openshift-marketplace/redhat-marketplace-lz2sx" Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.632030 4739 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lp2d\" (UniqueName: \"kubernetes.io/projected/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-kube-api-access-8lp2d\") pod \"redhat-marketplace-lz2sx\" (UID: \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\") " pod="openshift-marketplace/redhat-marketplace-lz2sx" Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.632414 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-catalog-content\") pod \"redhat-marketplace-lz2sx\" (UID: \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\") " pod="openshift-marketplace/redhat-marketplace-lz2sx" Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.632506 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-utilities\") pod \"redhat-marketplace-lz2sx\" (UID: \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\") " pod="openshift-marketplace/redhat-marketplace-lz2sx" Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.657462 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lp2d\" (UniqueName: \"kubernetes.io/projected/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-kube-api-access-8lp2d\") pod \"redhat-marketplace-lz2sx\" (UID: \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\") " pod="openshift-marketplace/redhat-marketplace-lz2sx" Feb 18 15:28:40 crc kubenswrapper[4739]: I0218 15:28:40.759737 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lz2sx" Feb 18 15:28:41 crc kubenswrapper[4739]: I0218 15:28:41.349925 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lz2sx"] Feb 18 15:28:42 crc kubenswrapper[4739]: I0218 15:28:42.203503 4739 generic.go:334] "Generic (PLEG): container finished" podID="23e26372-bcdf-4d10-ae5e-ae94c5a09f96" containerID="f7e0a0da20880e88763266894cec0cac9d9aacdb1019d76d16e9bbf915212bcb" exitCode=0 Feb 18 15:28:42 crc kubenswrapper[4739]: I0218 15:28:42.203567 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz2sx" event={"ID":"23e26372-bcdf-4d10-ae5e-ae94c5a09f96","Type":"ContainerDied","Data":"f7e0a0da20880e88763266894cec0cac9d9aacdb1019d76d16e9bbf915212bcb"} Feb 18 15:28:42 crc kubenswrapper[4739]: I0218 15:28:42.203841 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz2sx" event={"ID":"23e26372-bcdf-4d10-ae5e-ae94c5a09f96","Type":"ContainerStarted","Data":"45ee3a9d84e4d126fe757c663dd3a9c627c41af630679532b55671b085971fca"} Feb 18 15:28:43 crc kubenswrapper[4739]: I0218 15:28:43.228056 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz2sx" event={"ID":"23e26372-bcdf-4d10-ae5e-ae94c5a09f96","Type":"ContainerStarted","Data":"085cc58b8ca244dc8d4e1e2f215db32ed923a70c69f79a9923c0d4ab8599df69"} Feb 18 15:28:44 crc kubenswrapper[4739]: I0218 15:28:44.240670 4739 generic.go:334] "Generic (PLEG): container finished" podID="23e26372-bcdf-4d10-ae5e-ae94c5a09f96" containerID="085cc58b8ca244dc8d4e1e2f215db32ed923a70c69f79a9923c0d4ab8599df69" exitCode=0 Feb 18 15:28:44 crc kubenswrapper[4739]: I0218 15:28:44.240739 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz2sx" 
event={"ID":"23e26372-bcdf-4d10-ae5e-ae94c5a09f96","Type":"ContainerDied","Data":"085cc58b8ca244dc8d4e1e2f215db32ed923a70c69f79a9923c0d4ab8599df69"} Feb 18 15:28:45 crc kubenswrapper[4739]: I0218 15:28:45.254811 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz2sx" event={"ID":"23e26372-bcdf-4d10-ae5e-ae94c5a09f96","Type":"ContainerStarted","Data":"484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5"} Feb 18 15:28:45 crc kubenswrapper[4739]: I0218 15:28:45.275737 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lz2sx" podStartSLOduration=2.758651493 podStartE2EDuration="5.275711548s" podCreationTimestamp="2026-02-18 15:28:40 +0000 UTC" firstStartedPulling="2026-02-18 15:28:42.206494121 +0000 UTC m=+5354.702215043" lastFinishedPulling="2026-02-18 15:28:44.723554176 +0000 UTC m=+5357.219275098" observedRunningTime="2026-02-18 15:28:45.271861261 +0000 UTC m=+5357.767582203" watchObservedRunningTime="2026-02-18 15:28:45.275711548 +0000 UTC m=+5357.771432480" Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.023710 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bkfnv"] Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.030753 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.047770 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bkfnv"] Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.166801 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvfdc\" (UniqueName: \"kubernetes.io/projected/1ca25b9b-aaec-4d87-aa25-9c003455730c-kube-api-access-bvfdc\") pod \"redhat-operators-bkfnv\" (UID: \"1ca25b9b-aaec-4d87-aa25-9c003455730c\") " pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.166879 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ca25b9b-aaec-4d87-aa25-9c003455730c-utilities\") pod \"redhat-operators-bkfnv\" (UID: \"1ca25b9b-aaec-4d87-aa25-9c003455730c\") " pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.167030 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ca25b9b-aaec-4d87-aa25-9c003455730c-catalog-content\") pod \"redhat-operators-bkfnv\" (UID: \"1ca25b9b-aaec-4d87-aa25-9c003455730c\") " pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.270162 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ca25b9b-aaec-4d87-aa25-9c003455730c-catalog-content\") pod \"redhat-operators-bkfnv\" (UID: \"1ca25b9b-aaec-4d87-aa25-9c003455730c\") " pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.270553 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvfdc\" (UniqueName: \"kubernetes.io/projected/1ca25b9b-aaec-4d87-aa25-9c003455730c-kube-api-access-bvfdc\") pod 
\"redhat-operators-bkfnv\" (UID: \"1ca25b9b-aaec-4d87-aa25-9c003455730c\") " pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.270615 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ca25b9b-aaec-4d87-aa25-9c003455730c-utilities\") pod \"redhat-operators-bkfnv\" (UID: \"1ca25b9b-aaec-4d87-aa25-9c003455730c\") " pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.270913 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ca25b9b-aaec-4d87-aa25-9c003455730c-catalog-content\") pod \"redhat-operators-bkfnv\" (UID: \"1ca25b9b-aaec-4d87-aa25-9c003455730c\") " pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.271053 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ca25b9b-aaec-4d87-aa25-9c003455730c-utilities\") pod \"redhat-operators-bkfnv\" (UID: \"1ca25b9b-aaec-4d87-aa25-9c003455730c\") " pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.302331 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvfdc\" (UniqueName: \"kubernetes.io/projected/1ca25b9b-aaec-4d87-aa25-9c003455730c-kube-api-access-bvfdc\") pod \"redhat-operators-bkfnv\" (UID: \"1ca25b9b-aaec-4d87-aa25-9c003455730c\") " pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.352071 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:46 crc kubenswrapper[4739]: I0218 15:28:46.932097 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bkfnv"] Feb 18 15:28:47 crc kubenswrapper[4739]: I0218 15:28:47.279562 4739 generic.go:334] "Generic (PLEG): container finished" podID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerID="7fb308b1d103699fc5a573203a90dedaeb074f82c32a3e99777ea6cf1682f2fe" exitCode=0 Feb 18 15:28:47 crc kubenswrapper[4739]: I0218 15:28:47.279659 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkfnv" event={"ID":"1ca25b9b-aaec-4d87-aa25-9c003455730c","Type":"ContainerDied","Data":"7fb308b1d103699fc5a573203a90dedaeb074f82c32a3e99777ea6cf1682f2fe"} Feb 18 15:28:47 crc kubenswrapper[4739]: I0218 15:28:47.279858 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkfnv" event={"ID":"1ca25b9b-aaec-4d87-aa25-9c003455730c","Type":"ContainerStarted","Data":"4b9eb25a03d864c06ee955eb56ba0fc1dba7e630ebfafd06afdd34f5de8380c2"} Feb 18 15:28:48 crc kubenswrapper[4739]: I0218 15:28:48.293996 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkfnv" event={"ID":"1ca25b9b-aaec-4d87-aa25-9c003455730c","Type":"ContainerStarted","Data":"31c783928a66cc026685df40c12da5f960a835ef0d00c60d1a96d1d06a7fea3e"} Feb 18 15:28:50 crc kubenswrapper[4739]: I0218 15:28:50.760710 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lz2sx" Feb 18 15:28:50 crc kubenswrapper[4739]: I0218 15:28:50.761738 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-lz2sx" Feb 18 15:28:50 crc kubenswrapper[4739]: I0218 15:28:50.820350 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lz2sx" Feb 18 15:28:51 crc kubenswrapper[4739]: I0218 15:28:51.383301 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lz2sx" Feb 18 15:28:52 crc kubenswrapper[4739]: I0218 15:28:52.004706 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lz2sx"] Feb 18 15:28:53 crc kubenswrapper[4739]: I0218 15:28:53.356669 4739 generic.go:334] "Generic (PLEG): container finished" podID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerID="31c783928a66cc026685df40c12da5f960a835ef0d00c60d1a96d1d06a7fea3e" exitCode=0 Feb 18 15:28:53 crc kubenswrapper[4739]: I0218 15:28:53.356754 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkfnv" event={"ID":"1ca25b9b-aaec-4d87-aa25-9c003455730c","Type":"ContainerDied","Data":"31c783928a66cc026685df40c12da5f960a835ef0d00c60d1a96d1d06a7fea3e"} Feb 18 15:28:53 crc kubenswrapper[4739]: I0218 15:28:53.357259 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lz2sx" podUID="23e26372-bcdf-4d10-ae5e-ae94c5a09f96" containerName="registry-server" containerID="cri-o://484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5" gracePeriod=2 Feb 18 15:28:53 crc kubenswrapper[4739]: I0218 15:28:53.975058 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lz2sx" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.068676 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-utilities\") pod \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\" (UID: \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\") " Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.068952 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-catalog-content\") pod \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\" (UID: \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\") " Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.068990 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lp2d\" (UniqueName: \"kubernetes.io/projected/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-kube-api-access-8lp2d\") pod \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\" (UID: \"23e26372-bcdf-4d10-ae5e-ae94c5a09f96\") " Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.069321 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-utilities" (OuterVolumeSpecName: "utilities") pod "23e26372-bcdf-4d10-ae5e-ae94c5a09f96" (UID: "23e26372-bcdf-4d10-ae5e-ae94c5a09f96"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.069756 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.075602 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-kube-api-access-8lp2d" (OuterVolumeSpecName: "kube-api-access-8lp2d") pod "23e26372-bcdf-4d10-ae5e-ae94c5a09f96" (UID: "23e26372-bcdf-4d10-ae5e-ae94c5a09f96"). InnerVolumeSpecName "kube-api-access-8lp2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.094104 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23e26372-bcdf-4d10-ae5e-ae94c5a09f96" (UID: "23e26372-bcdf-4d10-ae5e-ae94c5a09f96"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.172510 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.172560 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lp2d\" (UniqueName: \"kubernetes.io/projected/23e26372-bcdf-4d10-ae5e-ae94c5a09f96-kube-api-access-8lp2d\") on node \"crc\" DevicePath \"\"" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.371544 4739 generic.go:334] "Generic (PLEG): container finished" podID="23e26372-bcdf-4d10-ae5e-ae94c5a09f96" containerID="484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5" exitCode=0 Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.371614 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lz2sx" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.372508 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz2sx" event={"ID":"23e26372-bcdf-4d10-ae5e-ae94c5a09f96","Type":"ContainerDied","Data":"484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5"} Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.372614 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lz2sx" event={"ID":"23e26372-bcdf-4d10-ae5e-ae94c5a09f96","Type":"ContainerDied","Data":"45ee3a9d84e4d126fe757c663dd3a9c627c41af630679532b55671b085971fca"} Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.372648 4739 scope.go:117] "RemoveContainer" containerID="484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.375125 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkfnv" event={"ID":"1ca25b9b-aaec-4d87-aa25-9c003455730c","Type":"ContainerStarted","Data":"bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45"} Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.410308 4739 scope.go:117] "RemoveContainer" containerID="085cc58b8ca244dc8d4e1e2f215db32ed923a70c69f79a9923c0d4ab8599df69" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.434781 4739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bkfnv" podStartSLOduration=2.95510517 podStartE2EDuration="9.43475232s" podCreationTimestamp="2026-02-18 15:28:45 +0000 UTC" firstStartedPulling="2026-02-18 15:28:47.281621531 +0000 UTC m=+5359.777342453" lastFinishedPulling="2026-02-18 15:28:53.761268681 +0000 UTC m=+5366.256989603" observedRunningTime="2026-02-18 15:28:54.412181384 +0000 UTC m=+5366.907902336" watchObservedRunningTime="2026-02-18 15:28:54.43475232 +0000 UTC m=+5366.930473262" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.442104 4739 scope.go:117] "RemoveContainer" containerID="f7e0a0da20880e88763266894cec0cac9d9aacdb1019d76d16e9bbf915212bcb" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.459072 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lz2sx"] Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.460694 4739 scope.go:117] "RemoveContainer" containerID="484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5" Feb 18 15:28:54 crc kubenswrapper[4739]: E0218 15:28:54.461115 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5\": container with ID starting with 484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5 not found: ID does not exist" containerID="484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.461171 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5"} err="failed to get container status \"484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5\": rpc error: code = NotFound desc = could not find container \"484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5\": container with ID starting with 
484cd76df9876f6f8f12e8625d7b9dc4b1d4a1f421442de35bfdc47b5019bda5 not found: ID does not exist" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.461202 4739 scope.go:117] "RemoveContainer" containerID="085cc58b8ca244dc8d4e1e2f215db32ed923a70c69f79a9923c0d4ab8599df69" Feb 18 15:28:54 crc kubenswrapper[4739]: E0218 15:28:54.461650 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"085cc58b8ca244dc8d4e1e2f215db32ed923a70c69f79a9923c0d4ab8599df69\": container with ID starting with 085cc58b8ca244dc8d4e1e2f215db32ed923a70c69f79a9923c0d4ab8599df69 not found: ID does not exist" containerID="085cc58b8ca244dc8d4e1e2f215db32ed923a70c69f79a9923c0d4ab8599df69" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.461749 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"085cc58b8ca244dc8d4e1e2f215db32ed923a70c69f79a9923c0d4ab8599df69"} err="failed to get container status \"085cc58b8ca244dc8d4e1e2f215db32ed923a70c69f79a9923c0d4ab8599df69\": rpc error: code = NotFound desc = could not find container \"085cc58b8ca244dc8d4e1e2f215db32ed923a70c69f79a9923c0d4ab8599df69\": container with ID starting with 085cc58b8ca244dc8d4e1e2f215db32ed923a70c69f79a9923c0d4ab8599df69 not found: ID does not exist" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.461816 4739 scope.go:117] "RemoveContainer" containerID="f7e0a0da20880e88763266894cec0cac9d9aacdb1019d76d16e9bbf915212bcb" Feb 18 15:28:54 crc kubenswrapper[4739]: E0218 15:28:54.462203 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7e0a0da20880e88763266894cec0cac9d9aacdb1019d76d16e9bbf915212bcb\": container with ID starting with f7e0a0da20880e88763266894cec0cac9d9aacdb1019d76d16e9bbf915212bcb not found: ID does not exist" containerID="f7e0a0da20880e88763266894cec0cac9d9aacdb1019d76d16e9bbf915212bcb" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.462309 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7e0a0da20880e88763266894cec0cac9d9aacdb1019d76d16e9bbf915212bcb"} err="failed to get container status \"f7e0a0da20880e88763266894cec0cac9d9aacdb1019d76d16e9bbf915212bcb\": rpc error: code = NotFound desc = could not find container \"f7e0a0da20880e88763266894cec0cac9d9aacdb1019d76d16e9bbf915212bcb\": container with ID starting with f7e0a0da20880e88763266894cec0cac9d9aacdb1019d76d16e9bbf915212bcb not found: ID does not exist" Feb 18 15:28:54 crc kubenswrapper[4739]: I0218 15:28:54.478506 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lz2sx"] Feb 18 15:28:56 crc kubenswrapper[4739]: I0218 15:28:56.352984 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:56 crc kubenswrapper[4739]: I0218 15:28:56.353588 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:28:56 crc kubenswrapper[4739]: I0218 15:28:56.437392 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23e26372-bcdf-4d10-ae5e-ae94c5a09f96" path="/var/lib/kubelet/pods/23e26372-bcdf-4d10-ae5e-ae94c5a09f96/volumes" Feb 18 15:28:57 crc kubenswrapper[4739]: I0218 15:28:57.411172 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bkfnv" 
podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerName="registry-server" probeResult="failure" output=< Feb 18 15:28:57 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:28:57 crc kubenswrapper[4739]: > Feb 18 15:28:59 crc kubenswrapper[4739]: I0218 15:28:59.372368 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:28:59 crc kubenswrapper[4739]: I0218 15:28:59.373099 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:29:07 crc kubenswrapper[4739]: I0218 15:29:07.407506 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bkfnv" podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerName="registry-server" probeResult="failure" output=< Feb 18 15:29:07 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:29:07 crc kubenswrapper[4739]: > Feb 18 15:29:17 crc kubenswrapper[4739]: I0218 15:29:17.862702 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bkfnv" podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerName="registry-server" probeResult="failure" output=< Feb 18 15:29:17 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:29:17 crc kubenswrapper[4739]: > Feb 18 15:29:27 crc kubenswrapper[4739]: I0218 15:29:27.732836 4739 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bkfnv" podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerName="registry-server" probeResult="failure" output=< Feb 18 15:29:27 crc kubenswrapper[4739]: timeout: failed to connect service ":50051" within 1s Feb 18 15:29:27 crc kubenswrapper[4739]: > Feb 18 15:29:29 crc kubenswrapper[4739]: I0218 15:29:29.373363 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:29:29 crc kubenswrapper[4739]: I0218 15:29:29.373760 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:29:36 crc kubenswrapper[4739]: I0218 15:29:36.423869 4739 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:29:36 crc kubenswrapper[4739]: I0218 15:29:36.478637 4739 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:29:36 crc kubenswrapper[4739]: I0218 15:29:36.662431 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bkfnv"] Feb 18 15:29:37 crc 
kubenswrapper[4739]: I0218 15:29:37.857724 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bkfnv" podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerName="registry-server" containerID="cri-o://bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45" gracePeriod=2 Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.398631 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.504738 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ca25b9b-aaec-4d87-aa25-9c003455730c-utilities\") pod \"1ca25b9b-aaec-4d87-aa25-9c003455730c\" (UID: \"1ca25b9b-aaec-4d87-aa25-9c003455730c\") " Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.504895 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvfdc\" (UniqueName: \"kubernetes.io/projected/1ca25b9b-aaec-4d87-aa25-9c003455730c-kube-api-access-bvfdc\") pod \"1ca25b9b-aaec-4d87-aa25-9c003455730c\" (UID: \"1ca25b9b-aaec-4d87-aa25-9c003455730c\") " Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.504941 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ca25b9b-aaec-4d87-aa25-9c003455730c-catalog-content\") pod \"1ca25b9b-aaec-4d87-aa25-9c003455730c\" (UID: \"1ca25b9b-aaec-4d87-aa25-9c003455730c\") " Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.505945 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ca25b9b-aaec-4d87-aa25-9c003455730c-utilities" (OuterVolumeSpecName: "utilities") pod "1ca25b9b-aaec-4d87-aa25-9c003455730c" (UID: "1ca25b9b-aaec-4d87-aa25-9c003455730c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.514630 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ca25b9b-aaec-4d87-aa25-9c003455730c-kube-api-access-bvfdc" (OuterVolumeSpecName: "kube-api-access-bvfdc") pod "1ca25b9b-aaec-4d87-aa25-9c003455730c" (UID: "1ca25b9b-aaec-4d87-aa25-9c003455730c"). InnerVolumeSpecName "kube-api-access-bvfdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.608389 4739 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ca25b9b-aaec-4d87-aa25-9c003455730c-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.608424 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvfdc\" (UniqueName: \"kubernetes.io/projected/1ca25b9b-aaec-4d87-aa25-9c003455730c-kube-api-access-bvfdc\") on node \"crc\" DevicePath \"\"" Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.628041 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ca25b9b-aaec-4d87-aa25-9c003455730c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ca25b9b-aaec-4d87-aa25-9c003455730c" (UID: "1ca25b9b-aaec-4d87-aa25-9c003455730c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.712144 4739 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ca25b9b-aaec-4d87-aa25-9c003455730c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.871986 4739 generic.go:334] "Generic (PLEG): container finished" podID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerID="bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45" exitCode=0 Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.872036 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkfnv" event={"ID":"1ca25b9b-aaec-4d87-aa25-9c003455730c","Type":"ContainerDied","Data":"bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45"} Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.872064 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkfnv" event={"ID":"1ca25b9b-aaec-4d87-aa25-9c003455730c","Type":"ContainerDied","Data":"4b9eb25a03d864c06ee955eb56ba0fc1dba7e630ebfafd06afdd34f5de8380c2"} Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.872085 4739 scope.go:117] "RemoveContainer" containerID="bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45" Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.872275 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bkfnv" Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.915712 4739 scope.go:117] "RemoveContainer" containerID="31c783928a66cc026685df40c12da5f960a835ef0d00c60d1a96d1d06a7fea3e" Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.923321 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bkfnv"] Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.933226 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bkfnv"] Feb 18 15:29:38 crc kubenswrapper[4739]: I0218 15:29:38.936260 4739 scope.go:117] "RemoveContainer" containerID="7fb308b1d103699fc5a573203a90dedaeb074f82c32a3e99777ea6cf1682f2fe" Feb 18 15:29:39 crc kubenswrapper[4739]: I0218 15:29:39.009041 4739 scope.go:117] "RemoveContainer" containerID="bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45" Feb 18 15:29:39 crc kubenswrapper[4739]: E0218 15:29:39.009793 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45\": container with ID starting with bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45 not found: ID does not exist" containerID="bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45" Feb 18 15:29:39 crc kubenswrapper[4739]: I0218 15:29:39.009836 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45"} err="failed to get container status \"bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45\": rpc error: code = NotFound desc = could not find container \"bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45\": container with ID starting with bde0c3d0c4b0b4b8de65cc5ca3b4e01f3e620979a975fac81a76146527227f45 not found: ID does not exist" Feb 18 15:29:39 crc 
kubenswrapper[4739]: I0218 15:29:39.009862 4739 scope.go:117] "RemoveContainer" containerID="31c783928a66cc026685df40c12da5f960a835ef0d00c60d1a96d1d06a7fea3e" Feb 18 15:29:39 crc kubenswrapper[4739]: E0218 15:29:39.010313 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31c783928a66cc026685df40c12da5f960a835ef0d00c60d1a96d1d06a7fea3e\": container with ID starting with 31c783928a66cc026685df40c12da5f960a835ef0d00c60d1a96d1d06a7fea3e not found: ID does not exist" containerID="31c783928a66cc026685df40c12da5f960a835ef0d00c60d1a96d1d06a7fea3e" Feb 18 15:29:39 crc kubenswrapper[4739]: I0218 15:29:39.010340 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31c783928a66cc026685df40c12da5f960a835ef0d00c60d1a96d1d06a7fea3e"} err="failed to get container status \"31c783928a66cc026685df40c12da5f960a835ef0d00c60d1a96d1d06a7fea3e\": rpc error: code = NotFound desc = could not find container \"31c783928a66cc026685df40c12da5f960a835ef0d00c60d1a96d1d06a7fea3e\": container with ID starting with 31c783928a66cc026685df40c12da5f960a835ef0d00c60d1a96d1d06a7fea3e not found: ID does not exist" Feb 18 15:29:39 crc kubenswrapper[4739]: I0218 15:29:39.010355 4739 scope.go:117] "RemoveContainer" containerID="7fb308b1d103699fc5a573203a90dedaeb074f82c32a3e99777ea6cf1682f2fe" Feb 18 15:29:39 crc kubenswrapper[4739]: E0218 15:29:39.010683 4739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fb308b1d103699fc5a573203a90dedaeb074f82c32a3e99777ea6cf1682f2fe\": container with ID starting with 7fb308b1d103699fc5a573203a90dedaeb074f82c32a3e99777ea6cf1682f2fe not found: ID does not exist" containerID="7fb308b1d103699fc5a573203a90dedaeb074f82c32a3e99777ea6cf1682f2fe" Feb 18 15:29:39 crc kubenswrapper[4739]: I0218 15:29:39.010746 4739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fb308b1d103699fc5a573203a90dedaeb074f82c32a3e99777ea6cf1682f2fe"} err="failed to get container status \"7fb308b1d103699fc5a573203a90dedaeb074f82c32a3e99777ea6cf1682f2fe\": rpc error: code = NotFound desc = could not find container \"7fb308b1d103699fc5a573203a90dedaeb074f82c32a3e99777ea6cf1682f2fe\": container with ID starting with 7fb308b1d103699fc5a573203a90dedaeb074f82c32a3e99777ea6cf1682f2fe not found: ID does not exist" Feb 18 15:29:40 crc kubenswrapper[4739]: I0218 15:29:40.423167 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" path="/var/lib/kubelet/pods/1ca25b9b-aaec-4d87-aa25-9c003455730c/volumes" Feb 18 15:29:59 crc kubenswrapper[4739]: I0218 15:29:59.372842 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:29:59 crc kubenswrapper[4739]: I0218 15:29:59.375216 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:29:59 crc kubenswrapper[4739]: I0218 15:29:59.375299 4739 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 15:29:59 crc kubenswrapper[4739]: I0218 15:29:59.376209 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe8593c5c5f5083dfa905ea7aa460cd337f7eb49309e21cc20ce89f16076db9d"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 15:29:59 crc kubenswrapper[4739]: I0218 15:29:59.376270 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://fe8593c5c5f5083dfa905ea7aa460cd337f7eb49309e21cc20ce89f16076db9d" gracePeriod=600 Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.116113 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="fe8593c5c5f5083dfa905ea7aa460cd337f7eb49309e21cc20ce89f16076db9d" exitCode=0 Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.116182 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"fe8593c5c5f5083dfa905ea7aa460cd337f7eb49309e21cc20ce89f16076db9d"} Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.116955 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerStarted","Data":"8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396"} Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.116980 4739 scope.go:117] "RemoveContainer" containerID="89e41d197c61407413d36ac73c98da1ddc1743ad221f5d397b61cfbd1c309400" Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.204688 4739 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r"] Feb 18 15:30:00 crc kubenswrapper[4739]: E0218 15:30:00.205391 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23e26372-bcdf-4d10-ae5e-ae94c5a09f96" containerName="extract-content" Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.205419 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="23e26372-bcdf-4d10-ae5e-ae94c5a09f96" containerName="extract-content" Feb 18 15:30:00 crc kubenswrapper[4739]: E0218 15:30:00.205463 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23e26372-bcdf-4d10-ae5e-ae94c5a09f96" containerName="registry-server" Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.205474 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="23e26372-bcdf-4d10-ae5e-ae94c5a09f96" containerName="registry-server" Feb 18 15:30:00 crc kubenswrapper[4739]: E0218 15:30:00.205522 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerName="registry-server" Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.205538 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerName="registry-server" Feb 18 15:30:00 crc kubenswrapper[4739]: E0218 15:30:00.205566 4739 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="23e26372-bcdf-4d10-ae5e-ae94c5a09f96" containerName="extract-utilities" Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.205578 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="23e26372-bcdf-4d10-ae5e-ae94c5a09f96" containerName="extract-utilities" Feb 18 15:30:00 crc kubenswrapper[4739]: E0218 15:30:00.205602 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerName="extract-content" Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.205613 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerName="extract-content" Feb 18 15:30:00 crc kubenswrapper[4739]: E0218 15:30:00.205644 4739 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerName="extract-utilities" Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.205655 4739 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerName="extract-utilities" Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.206001 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ca25b9b-aaec-4d87-aa25-9c003455730c" containerName="registry-server" Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.206058 4739 memory_manager.go:354] "RemoveStaleState removing state" podUID="23e26372-bcdf-4d10-ae5e-ae94c5a09f96" containerName="registry-server" Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.207230 4739 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r" Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.210361 4739 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.211286 4739 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.219397 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r"] Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.341367 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-config-volume\") pod \"collect-profiles-29523810-lzp2r\" (UID: \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r" Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.341624 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-secret-volume\") pod \"collect-profiles-29523810-lzp2r\" (UID: \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r" Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.341979 4739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl9l6\" (UniqueName: \"kubernetes.io/projected/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-kube-api-access-hl9l6\") pod \"collect-profiles-29523810-lzp2r\" (UID: \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r" Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.445379 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-config-volume\") pod \"collect-profiles-29523810-lzp2r\" (UID: \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r" Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.445545 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-secret-volume\") pod \"collect-profiles-29523810-lzp2r\" (UID: \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r" Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.445647 4739 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl9l6\" (UniqueName: \"kubernetes.io/projected/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-kube-api-access-hl9l6\") pod \"collect-profiles-29523810-lzp2r\" (UID: \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r" Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.446438 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-config-volume\") pod \"collect-profiles-29523810-lzp2r\" (UID: \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r" Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.454363 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-secret-volume\") pod \"collect-profiles-29523810-lzp2r\" (UID: \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r" Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.465328 4739 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl9l6\" (UniqueName: \"kubernetes.io/projected/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-kube-api-access-hl9l6\") pod \"collect-profiles-29523810-lzp2r\" (UID: \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r" Feb 18 15:30:00 crc kubenswrapper[4739]: I0218 15:30:00.527009 4739 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r" Feb 18 15:30:01 crc kubenswrapper[4739]: W0218 15:30:01.041510 4739 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d5496ac_13c6_454a_8f4f_c5d40c7bf53f.slice/crio-fbd9006e1cd3aedf14fb9ab9024ae6ea5378d6e9066732547481a19183fa918d WatchSource:0}: Error finding container fbd9006e1cd3aedf14fb9ab9024ae6ea5378d6e9066732547481a19183fa918d: Status 404 returned error can't find the container with id fbd9006e1cd3aedf14fb9ab9024ae6ea5378d6e9066732547481a19183fa918d Feb 18 15:30:01 crc kubenswrapper[4739]: I0218 15:30:01.048476 4739 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r"] Feb 18 15:30:01 crc kubenswrapper[4739]: I0218 15:30:01.146719 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r" event={"ID":"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f","Type":"ContainerStarted","Data":"fbd9006e1cd3aedf14fb9ab9024ae6ea5378d6e9066732547481a19183fa918d"} Feb 18 15:30:02 crc kubenswrapper[4739]: I0218 15:30:02.161796 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r" event={"ID":"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f","Type":"ContainerStarted","Data":"de46eaa47d983bbe7229ec06638adabaf278d0bae33499bd90a6086fc101208a"} Feb 18 15:30:03 crc kubenswrapper[4739]: I0218 15:30:03.176049 4739 generic.go:334] "Generic (PLEG): container finished" podID="5d5496ac-13c6-454a-8f4f-c5d40c7bf53f" containerID="de46eaa47d983bbe7229ec06638adabaf278d0bae33499bd90a6086fc101208a" exitCode=0 Feb 18 15:30:03 crc kubenswrapper[4739]: I0218 15:30:03.176114 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r" event={"ID":"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f","Type":"ContainerDied","Data":"de46eaa47d983bbe7229ec06638adabaf278d0bae33499bd90a6086fc101208a"} Feb 18 15:30:03 crc kubenswrapper[4739]: I0218 15:30:03.596118 4739 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r" Feb 18 15:30:03 crc kubenswrapper[4739]: I0218 15:30:03.753673 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hl9l6\" (UniqueName: \"kubernetes.io/projected/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-kube-api-access-hl9l6\") pod \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\" (UID: \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\") " Feb 18 15:30:03 crc kubenswrapper[4739]: I0218 15:30:03.753792 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-secret-volume\") pod \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\" (UID: \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\") " Feb 18 15:30:03 crc kubenswrapper[4739]: I0218 15:30:03.753840 4739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-config-volume\") pod \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\" (UID: \"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f\") " Feb 18 15:30:03 crc kubenswrapper[4739]: I0218 15:30:03.756094 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-config-volume" (OuterVolumeSpecName: "config-volume") pod "5d5496ac-13c6-454a-8f4f-c5d40c7bf53f" (UID: "5d5496ac-13c6-454a-8f4f-c5d40c7bf53f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 15:30:03 crc kubenswrapper[4739]: I0218 15:30:03.762686 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-kube-api-access-hl9l6" (OuterVolumeSpecName: "kube-api-access-hl9l6") pod "5d5496ac-13c6-454a-8f4f-c5d40c7bf53f" (UID: "5d5496ac-13c6-454a-8f4f-c5d40c7bf53f"). InnerVolumeSpecName "kube-api-access-hl9l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 15:30:03 crc kubenswrapper[4739]: I0218 15:30:03.763044 4739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5d5496ac-13c6-454a-8f4f-c5d40c7bf53f" (UID: "5d5496ac-13c6-454a-8f4f-c5d40c7bf53f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 15:30:03 crc kubenswrapper[4739]: I0218 15:30:03.857643 4739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hl9l6\" (UniqueName: \"kubernetes.io/projected/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-kube-api-access-hl9l6\") on node \"crc\" DevicePath \"\"" Feb 18 15:30:03 crc kubenswrapper[4739]: I0218 15:30:03.857683 4739 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 15:30:03 crc kubenswrapper[4739]: I0218 15:30:03.857694 4739 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d5496ac-13c6-454a-8f4f-c5d40c7bf53f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 15:30:04 crc kubenswrapper[4739]: I0218 15:30:04.189699 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r" event={"ID":"5d5496ac-13c6-454a-8f4f-c5d40c7bf53f","Type":"ContainerDied","Data":"fbd9006e1cd3aedf14fb9ab9024ae6ea5378d6e9066732547481a19183fa918d"} Feb 18 15:30:04 crc kubenswrapper[4739]: I0218 15:30:04.189745 4739 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbd9006e1cd3aedf14fb9ab9024ae6ea5378d6e9066732547481a19183fa918d" Feb 18 15:30:04 crc kubenswrapper[4739]: I0218 15:30:04.189811 4739 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523810-lzp2r" Feb 18 15:30:04 crc kubenswrapper[4739]: I0218 15:30:04.681939 4739 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc"] Feb 18 15:30:04 crc kubenswrapper[4739]: I0218 15:30:04.694716 4739 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523765-q4ltc"] Feb 18 15:30:06 crc kubenswrapper[4739]: I0218 15:30:06.423263 4739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d759be05-a3d9-4dd0-b360-dc1f752b84be" path="/var/lib/kubelet/pods/d759be05-a3d9-4dd0-b360-dc1f752b84be/volumes" Feb 18 15:30:39 crc kubenswrapper[4739]: I0218 15:30:39.620282 4739 scope.go:117] "RemoveContainer" containerID="aa9ba9ec1d52c3700b6b7f0b25f14494ecf423b123e22d781f5b92c7a26b7e48" Feb 18 15:31:59 crc kubenswrapper[4739]: I0218 15:31:59.372905 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:31:59 crc kubenswrapper[4739]: I0218 15:31:59.373511 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:32:29 crc kubenswrapper[4739]: I0218 15:32:29.372649 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 18 15:32:29 crc kubenswrapper[4739]: I0218 15:32:29.373073 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:32:59 crc kubenswrapper[4739]: I0218 15:32:59.373250 4739 patch_prober.go:28] interesting pod/machine-config-daemon-mc7b4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 15:32:59 crc kubenswrapper[4739]: I0218 15:32:59.373943 4739 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 15:32:59 crc kubenswrapper[4739]: I0218 15:32:59.374008 4739 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" Feb 18 15:32:59 crc kubenswrapper[4739]: I0218 15:32:59.375052 4739 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396"} pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 15:32:59 crc kubenswrapper[4739]: I0218 15:32:59.375121 4739 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerName="machine-config-daemon" containerID="cri-o://8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396" gracePeriod=600 Feb 18 15:32:59 crc kubenswrapper[4739]: E0218 15:32:59.533514 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:33:00 crc kubenswrapper[4739]: I0218 15:33:00.263089 4739 generic.go:334] "Generic (PLEG): container finished" podID="947a1bc9-4557-4cd9-aa90-9d3893aad914" containerID="8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396" exitCode=0 Feb 18 15:33:00 crc kubenswrapper[4739]: I0218 15:33:00.263152 4739 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" event={"ID":"947a1bc9-4557-4cd9-aa90-9d3893aad914","Type":"ContainerDied","Data":"8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396"} Feb 18 15:33:00 crc kubenswrapper[4739]: I0218 15:33:00.263208 4739 scope.go:117] "RemoveContainer" containerID="fe8593c5c5f5083dfa905ea7aa460cd337f7eb49309e21cc20ce89f16076db9d" Feb 18 15:33:00 crc kubenswrapper[4739]: I0218 15:33:00.264078 4739 scope.go:117] "RemoveContainer" 
containerID="8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396" Feb 18 15:33:00 crc kubenswrapper[4739]: E0218 15:33:00.264567 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:33:14 crc kubenswrapper[4739]: I0218 15:33:14.410628 4739 scope.go:117] "RemoveContainer" containerID="8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396" Feb 18 15:33:14 crc kubenswrapper[4739]: E0218 15:33:14.411437 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:33:29 crc kubenswrapper[4739]: I0218 15:33:29.410288 4739 scope.go:117] "RemoveContainer" containerID="8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396" Feb 18 15:33:29 crc kubenswrapper[4739]: E0218 15:33:29.411406 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:33:41 crc kubenswrapper[4739]: I0218 15:33:41.411136 4739 scope.go:117] "RemoveContainer" containerID="8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396" Feb 18 15:33:41 crc kubenswrapper[4739]: E0218 15:33:41.412124 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:33:53 crc kubenswrapper[4739]: I0218 15:33:53.411778 4739 scope.go:117] "RemoveContainer" containerID="8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396" Feb 18 15:33:53 crc kubenswrapper[4739]: E0218 15:33:53.412841 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:34:07 crc kubenswrapper[4739]: I0218 15:34:07.411347 4739 scope.go:117] "RemoveContainer" containerID="8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396" Feb 18 15:34:07 crc kubenswrapper[4739]: E0218 15:34:07.412327 4739 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:34:20 crc kubenswrapper[4739]: I0218 15:34:20.412025 4739 scope.go:117] "RemoveContainer" containerID="8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396" Feb 18 15:34:20 crc kubenswrapper[4739]: E0218 15:34:20.413200 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:34:32 crc kubenswrapper[4739]: I0218 15:34:32.411976 4739 scope.go:117] "RemoveContainer" containerID="8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396" Feb 18 15:34:32 crc kubenswrapper[4739]: E0218 15:34:32.413184 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:34:45 crc kubenswrapper[4739]: I0218 15:34:45.411547 4739 scope.go:117] "RemoveContainer" containerID="8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396" Feb 18 15:34:45 crc kubenswrapper[4739]: E0218 15:34:45.412533 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:34:59 crc kubenswrapper[4739]: I0218 15:34:59.411070 4739 scope.go:117] "RemoveContainer" containerID="8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396" Feb 18 15:34:59 crc kubenswrapper[4739]: E0218 15:34:59.412206 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914" Feb 18 15:35:11 crc kubenswrapper[4739]: I0218 15:35:11.410974 4739 scope.go:117] "RemoveContainer" containerID="8ac4e9929fafaf304737ec23b6c4e7f64b6a4496616c2e375e255ac768444396" Feb 18 15:35:11 crc kubenswrapper[4739]: E0218 15:35:11.411818 4739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mc7b4_openshift-machine-config-operator(947a1bc9-4557-4cd9-aa90-9d3893aad914)\"" pod="openshift-machine-config-operator/machine-config-daemon-mc7b4" podUID="947a1bc9-4557-4cd9-aa90-9d3893aad914"